Artificial Intelligence and Learning Through Robotics: An Interview with Circadence CTO Bradley Hayes

  • Bradley Hayes, Chief Technology Officer
  • January 08, 2019

We sat down with Circadence’s own Chief Technology Officer, Brad Hayes, to delve deeper into the meaning of AI and machine learning as it relates to the cybersecurity field, to discuss how robotics inform best cybersecurity practices, and to learn about new developments that are shaping the future of the field.

Artificial Intelligence (AI) is a phrase we hear quite often. It’s thrown around in movies and TV shows, listed as a feature in new devices we buy, and is even brought into our homes through voice services like Siri and Alexa. AI is a technology that is being positioned to help us, as consumers and professionals, perform traditionally complex tasks with ease. The ability to automate and augment responsibilities using robotics continues to gain traction as our digital footprints expand. And surrounding it all, cybersecurity becomes ever more critical as we seek out better ways to protect ourselves, our schools, our businesses, and national security.

Before we talk about Artificial Intelligence and machine learning, can you tell me a little more about your robotics research?

BH: The central theme of my lab’s research is building technology to enable autonomous systems to safely and productively collaborate with humans, improving both human and machine performance. The main goal is developing human-understandable systems and algorithms to create teams that are greater than the sum of their parts, outperforming the state of the art in inferring intent, multi-agent coordination, and learning from demonstration. Robotics is a foundation upon which AI and machine learning technology can be deployed with substantial impact, and it opens doors for skill building and capability expansion when we use these techniques in the context of cybersecurity learning.

Can robots help humans be more efficient?

BH:  Early robotics research focused on creating robots that would primarily occupy a purely physical role: as a force multiplier that adds physical strength, repetition, or precision to a process (like a robotic arm helping to transport material). Within the scope of earlier AI research, decision support systems were designed as cognitive assistants, helping humans make more informed choices. The next evolution of robotics research synthesizes these AI advancements, helping engineers and developers understand how to automate and augment processes of cognition and interaction.

The idea of machines/robots helping professionals automate and augment tasks and decision-making is interesting. Can you explain how machine learning folds into this idea?

BH:  Machine learning is a broad concept. It gets confused a lot with artificial intelligence (AI), which is more of an umbrella term.  Machine learning is a term that applies to systems that adapt based on behavior or action, while AI is descriptive of intelligence that doesn’t necessarily need to change as a function of its experiences over time.
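That distinction can be made concrete with a toy sketch (a hypothetical illustration, not something from the interview): a fixed rule-based check exhibits "intelligence" without ever changing, while a system that updates its internal weights from feedback is doing machine learning.

```python
# Toy contrast between a fixed AI rule and a learning system.
# The spam-flagging scenario and all names here are illustrative assumptions.

def rule_based_flag(message: str) -> bool:
    """'AI' without learning: behavior never changes with experience."""
    return "free money" in message.lower()

class AdaptiveFlagger:
    """Machine learning: keyword weights adapt based on labeled feedback."""
    def __init__(self):
        self.weights = {}  # word -> suspicion score

    def score(self, message: str) -> float:
        return sum(self.weights.get(w, 0.0) for w in message.lower().split())

    def update(self, message: str, is_bad: bool):
        # Simple perceptron-style update: reinforce words from bad messages.
        step = 1.0 if is_bad else -0.5
        for w in message.lower().split():
            self.weights[w] = self.weights.get(w, 0.0) + step

flagger = AdaptiveFlagger()
flagger.update("claim your prize now", True)    # feedback alters future behavior
flagger.update("meeting notes attached", False)
print(flagger.score("claim your prize"))        # positive: learned suspicion
```

The rule-based function behaves identically forever; the adaptive one changes as a function of its experience, which is the dividing line Hayes describes.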

AI and machine learning are ever-present in our lives. Route directions on Google Maps, for example, use a combination of AI techniques to find a path between your source and destination while machine learning models estimate factors like traffic, time of day, and weather conditions to get you to your destination as quickly as possible. Netflix uses a tremendous amount of data, processed within their machine learning models, to predict shows that you might like. They also use these models to inform which programs they’re going to manage and create. Likewise, Pandora and Spotify use machine learning to tell you what they think you’d like to listen to. Machine learning is ubiquitous, already telling us where to go, what to see, and what to listen to.
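The route-directions example can be sketched in miniature: a classic AI search algorithm (Dijkstra's, here) finds the path, while the edge costs come from a predictive model. In this hypothetical sketch the "learned" model is just a stub function; the road graph and all values are made up for illustration.

```python
import heapq

def predicted_travel_time(base_minutes: float, traffic_factor: float) -> float:
    """Stand-in for a trained model over traffic, time of day, weather, etc."""
    return base_minutes * traffic_factor

# Illustrative road graph: node -> list of (neighbor, base_minutes, traffic_factor)
roads = {
    "home":     [("highway", 5, 2.0), ("backroad", 8, 1.0)],
    "highway":  [("office", 10, 1.8)],
    "backroad": [("office", 12, 1.0)],
    "office":   [],
}

def fastest_route(graph, start, goal):
    """Classic AI search (Dijkstra) over ML-estimated edge costs."""
    frontier = [(0.0, start, [start])]  # (cost so far, node, path)
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, base, traffic in graph[node]:
            est = predicted_travel_time(base, traffic)
            heapq.heappush(frontier, (cost + est, nbr, path + [nbr]))
    return float("inf"), []

cost, path = fastest_route(roads, "home", "office")
print(path, cost)  # the back road wins once traffic is factored in
```

Note the division of labor: the search procedure is fixed, but swapping in a better traffic model changes which route it recommends, which is exactly how the two techniques combine in practice.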

How does robotics relate to cybersecurity?

BH: A lot of the problems that we’re trying to tackle in the human-robot interaction research space are also echoed within the cybersecurity industry. If we want to design a robot teammate for a manufacturing task, that robot will need to be able to infer a human’s goals and intent from observation. This will let the robot perform productive actions, avoid collisions, and generally not be infuriatingly “in the way” during collaboration. Now apply that behavior to cybersecurity: Consider an autonomous agent that can infer the intent of actors on a system on your network, based on their behavior. Once those intentions are known, a defender can take steps to mitigate threats so malicious actors can’t achieve their goals. That’s a force multiplier for those defenders, making them more powerful and productive!
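The intent-inference idea above can be sketched as a simple Bayesian update over possible goals. Everything in this snippet, including the action names and probabilities, is a hypothetical illustration rather than an actual detection system.

```python
# Belief over an actor's intent, updated from observed actions.
# Likelihoods P(action | intent) are illustrative assumptions only.
likelihood = {
    "benign":    {"read_docs": 0.6, "port_scan": 0.05, "exfiltrate": 0.01},
    "malicious": {"read_docs": 0.2, "port_scan": 0.5,  "exfiltrate": 0.4},
}

def update_belief(prior, action):
    """One step of Bayes' rule: posterior ∝ prior * likelihood."""
    posterior = {intent: p * likelihood[intent].get(action, 1e-6)
                 for intent, p in prior.items()}
    total = sum(posterior.values())
    return {intent: p / total for intent, p in posterior.items()}

belief = {"benign": 0.9, "malicious": 0.1}  # start by assuming mostly benign
for action in ["read_docs", "port_scan", "exfiltrate"]:
    belief = update_belief(belief, action)
print(belief)  # suspicious actions shift probability mass toward "malicious"
```

As the observed behavior accumulates, the posterior shifts toward the malicious hypothesis, and a defender (or autonomous agent) can act before the goal is achieved — the same observe-and-infer loop used for a robot teammate on a factory floor.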

The relationship between the autonomous teammate and the human is especially important to cybersecurity education, as we can use learning technologies to assess a learner’s skill set and guide their progress to make them more effective more quickly. Beyond cooperative activities, we can also use these autonomous agents as opponents, providing a cost-effective means of teaching cyber professionals to react and respond to realistic attacks, forcing them to think more strategically and creatively to overcome adversaries.

Thinking about the relationship between robotics and cybersecurity, an example I often return to is when IBM’s “Deep Blue” beat Garry Kasparov at chess. People were asking: “Does this mean that computers are smarter than people? What does this mean for the future of chess?” My response is that this doesn’t mean we’re going to abandon chess, but rather that we will have new tools to train with and improve. In fact, that advancement helped spur great interest in human-machine teaming within the game of chess.

To me, the most exciting aspect of these systems is when it’s shown that a team consisting of an expert human and the AI can beat the AI by itself, suggesting that there are still aspects of the game not yet captured by the system. This example is illustrative of the fact that even in domains widely considered “solved,” the human still brings something valuable to the team.

Why does cyber learning matter to you and why is cybersecurity so important given advancements in AI and machine learning?

BH:  Cybersecurity professionals can engage in a cyber range learning environment against AI-powered adversaries and gain new insights into their approach, positively impacting threat response and mitigation. Further, they can learn to team up with AI-powered agents to accomplish tasks more quickly and to develop mitigation strategies that defeat increasingly capable, quick, and clever opponents. Cyber learning through AI-powered intelligent tutoring is of paramount importance for providing affordable, effective, and personalized education at scale.

As we’ve been quick to inject computation into pretty much every aspect of life, the speed at which we’ve deployed these systems has come at a cost. At this stage, I would consider it a debt, as there is a tendency to deploy systems without properly safeguarding them and/or ensuring that they’re reliably operational under potentially adversarial operating conditions.

Further, cybersecurity doesn’t just mean being able to defend against intentional adversaries, but also against unintentional consequences stemming from benign actions by people we trust. In any case, the attack surface grows rapidly as points of interaction grow in number. Because of this, I don’t foresee a viable strategy that doesn’t heavily involve the use of AI and machine learning for cybersecurity professionals, both in terms of learning and continuing education and in terms of effective coordination against increasingly capable adversaries.

These concepts are important to know and understand as government, enterprise, and academic institutions look to keep pace with the evolving threat-scape and prepare the next generation of cyber professionals. To learn more about how Circadence is at the forefront of cybersecurity learning tools, visit https://circadence.com/.