Designing for AI

UX RESEARCH / UX


Artificial intelligence presents a unique use case in cybersecurity. AI designers must consider the following questions: What pain points does AI solve for our end users? And, more importantly, what pain points does AI in cybersecurity introduce? On QRadar Advisor with Watson, we set out to answer these questions.

Contributors: Andi Lozano, Terra Banal


Context

In a three-day workshop, the QRadar Advisor team of architects, developers, offering managers, and designers got together in a room filled with whiteboards to discuss a new feature using AI. When a distinguished designer asked us what our intent for artificial intelligence was, the room fell silent.

Having an intent for AI leads teams to focus on an actual user need, and eliminates the “just put AI in it” mentality.

The problem — our intent

Security analysts are overwhelmed by the amount of information required to do their jobs efficiently and effectively. A significant number of security alerts are overlooked or ignored in favor of “higher risk” events. Junior security analysts are often new to the industry and the workforce, and simply do not yet have the skills to detect and hunt for threats.

On top of the skilled-staff shortage in the cybersecurity industry, security information and event management (SIEM) platforms are often too complex to tune properly, so rules fail to trigger the right security incidents for investigation, resulting in a noisy environment and plenty of false positives. At the time of the workshop, Advisor’s AI sought to bring together correlated data, but analysts found the tool overwhelming because it provided no evidence for why Advisor considered an incident to be of high concern or risk.

It’s a security analyst’s job to be really skeptical, and if we provide no evidence for the insights we give them, why would they even trust it?
— me, lamenting to my product team about ethical AI

Understand

Using IBM Design’s point of view on AI, we sought to address five focus areas of ethics within the AI experience:

  1. Accountability

  2. Value alignment

  3. Explainability

  4. Fairness

  5. User data rights

These focus areas provide an intentional framework for establishing an ethical foundation for building and using AI systems. With them in mind, we knew our goal: reduce the analyst’s investigation queue and help them determine, within minutes, whether a security incident was a false positive.

Design Concepts

The product used multiple types of AI, but we quickly learned that users don’t care about the distinction. They refer to the whole offering as “Watson” or “artificial intelligence,” even when parts of the product involve no AI at all. I sought to understand the whole system flow by creating a conceptual model (IBM confidential). The model mapped the end-to-end user experience onto the system architecture. The goal was to understand when an analyst would want to see the AI’s prediction on a security incident.

POC model

As we dove into creating the end-to-end flow, our data scientists came to us with good and bad news. Good news: the model was high-performing and extremely accurate. Bad news: we needed more data. AI is like a child and only understands what you give it. In the early stages of building a model, it cannot learn reliably on its own; it needs to be supervised. If our AI labeled a real security incident as a false positive and that incident turned out to be the source of a data breach, we’d have a lawsuit on our hands. This was a risk we were not willing to take. Before the AI could be released “into the wild,” we needed human feedback from a real security environment to tell the AI explicitly what each security incident was: a false positive, or not a false positive.

Introducing… IBM Security’s first AI feedback loop


Through a sponsor user program, we released a Beta version of the artificial intelligence model to a small group of sponsor users. The users then trained the model with human-to-computer feedback directly within their SIEM platform. This gave us insight into where the feedback loop component should live within the UI and when to ask the analyst for their evaluation of a security incident.
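
To make the mechanics concrete, here is a minimal, hypothetical sketch of what such a feedback loop can look like in code: the analyst’s verdict is captured as a label, stored, and periodically used to refit the model. The class names, feature values, and the logistic-regression stand-in are assumptions for illustration only; they are not the actual Advisor implementation.

# Hypothetical human-in-the-loop feedback cycle; illustrative only.
from dataclasses import dataclass
from typing import List

from sklearn.linear_model import LogisticRegression

@dataclass
class AnalystFeedback:
    incident_id: str
    features: List[float]     # the features the model saw for this incident
    is_false_positive: bool   # the analyst's verdict, captured in the SIEM UI

feedback_log: List[AnalystFeedback] = []

def record_feedback(incident_id: str, features: List[float], is_false_positive: bool) -> None:
    """Store the analyst's verdict so it can be used as a training label."""
    feedback_log.append(AnalystFeedback(incident_id, features, is_false_positive))

def retrain(model: LogisticRegression) -> LogisticRegression:
    """Periodically refit the classifier on the labels analysts have provided."""
    X = [f.features for f in feedback_log]
    y = [f.is_false_positive for f in feedback_log]
    model.fit(X, y)
    return model

# Example: two analysts weigh in, then the model is refit on their labels.
record_feedback("incident-001", [0.9, 0.1, 3.0], is_false_positive=True)
record_feedback("incident-002", [0.2, 0.8, 7.0], is_false_positive=False)
model = retrain(LogisticRegression())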

Key insights

Explainability
Explainability means something different to a data scientist than it does to an analyst. Analysts don’t seem to care how an AI works; rather, they want to know why the AI came to the determination it did.

Trust
If we do not provide supporting evidence as to why the AI said a security event was a false positive or not, then analysts will not trust it and will, therefore, not use it (see the sketch after these insights).

Improvement over time
Analysts believed that, out of the box, the AI was immature, and they were willing to train it to improve its accuracy.
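
To illustrate what “supporting evidence” could mean in practice, here is a small hypothetical sketch of a prediction that carries its own rationale, so an analyst can judge the verdict rather than take it on faith. The field names and values are assumptions for illustration, not the Advisor data model.

# Hypothetical shape of a prediction that includes its supporting evidence.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Prediction:
    incident_id: str
    verdict: str        # e.g. "false positive" or "needs investigation"
    confidence: float   # model confidence, 0.0 to 1.0
    evidence: List[str] = field(default_factory=list)  # human-readable reasons

prediction = Prediction(
    incident_id="incident-001",
    verdict="false positive",
    confidence=0.92,
    evidence=[
        "Source IP is an internal vulnerability scanner",
        "No outbound connections followed the triggering event",
        "Similar incidents were previously closed as false positives by analysts",
    ],
)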

Looking forward

As we say, everything is a prototype, and in this process we learned a great deal about how to weave ethical AI into the fabric of our security offerings. Although we had weekly syncs with our data scientist while the model was being built, I think we could have been more involved during model training and been a louder advocate for explainability.

I like how you take initiative in collaborating both within and across teams and I’ve tried to take that back to my team and implement it. I also like how you’ve kept an open mind throughout our interactions and appreciate your willingness to take risks and work with me on critical data requirements that I’ve thrown at you during the course of our project.
— Data scientist, IBM Security