Cyber Security and the Baby Boomer, Gen X Populations

We all have someone in our lives who isn’t tech-savvy. They don’t know how to convert a Word doc into a PDF, they try to do a Google search on Facebook, or they struggle with the ‘simple’ act of text messaging. These are not uncommon missteps for people who didn’t grow up with Siri® (let alone the Internet!) at their fingertips. While these mistakes seem harmless or even comical at times, they can have much more serious cyber security consequences.

Baby Boomer and Generation X populations (born 1946-64 and 1965-76) are a growing target for scammers because they are a largely trusting population made up of financially successful people, and some of the oldest may have cognition or memory ailments. The American Journal of Public Health estimates that about 5% of the Baby Boomer population (about 2 to 3 million people) suffer from some sort of scam every year. The Federal Bureau of Investigation reports that older adults lose more than 3 billion dollars a year to financial scams.

Some of the most common cyber threats that vulnerable Baby Boomers fall victim to are impersonation scams, or fraud. This is a form of deception that leads unsuspecting victims to hand over money, property, or personal information in exchange for something they perceive as valuable or worth protecting. According to Scam Watch, 10,297 scams have been reported so far in 2019 in the 55-64 age range, and 13,323 among those 65 and older.

Here are some of the top types of scams used against this population: 

  • Medicare, health insurance, and pharmacy scams, in which perpetrators may pose as a Medicare representative or provide bogus healthcare services in order to gain access to patients’ personal information. Victims may also be persuaded to buy unsafe or fake prescription medication that can harm their health.
  • Sweepstakes and lottery fraud occurs when an advertisement pops up saying you’re the lucky winner of a random website sweepstakes. This is a ploy to get people to enter their personal information, including their address and credit card number, in order to “claim a prize” or win money.
  • Sweetheart scams seem unusually cruel. With many in the Baby Boomer population dealing with the death of a loved one or children leaving home, and perhaps living alone for the first time, loneliness can creep in. Scammers in these scenarios pretend to be a love interest of the victim and eventually ask for money to help support them.

The good news is that we can help the most vulnerable in this population avoid falling victim to a scam. We can have conversations to raise awareness of online and phone safety practices, make frequent visits and facilitate discussions about monthly bills and medications, and destigmatize the fear or embarrassment of coming forward if they find they have been taken advantage of (waiting to rectify the situation can only make things worse). You can report scams to a number of organizations, including the FBI, Social Security Administration, Federal Trade Commission, or your bank or retirement facility.

Don’t wait until it’s too late: have important conversations with loved ones of all ages and ensure they feel empowered to make smart decisions online.

Webinar: How Gamification, Artificial Intelligence, and Reinforcement Learning Will Revolutionize Cyber Skill Acquisition

Increase Cyber Skills with Gamification

Join Bradley Hayes, Circadence Chief Technology Officer and Assistant Professor, University of Colorado Boulder Engineering and Applied Science, to learn how you can leverage AI to enhance cyber competencies and abilities.

Bradley will share how, where offensive, defensive, and forensic security tools end, human analysts can leverage AI agents to anticipate, mitigate, and prevent tomorrow’s threats.

What You’ll Learn

  • Why there is a need for continuous access to dynamic skill-building opportunities using AI and game mechanics
  • How cyber professionals can automate and augment their workloads using AI-powered collaborators
  • How reinforcement learning settings positively impact timely cyber reaction and response.

DeepFake: The Deeply Disturbing Implications Behind This New Technology

DeepFake is a term you may have heard lately. The term is a combination of “deep learning” and “fake news”. Deep learning is a class of machine learning algorithms that impact image processing, and fake news is just that – deliberate misinformation spread through news outlets or social media. Essentially, DeepFake is a process by which anyone can create audio and/or video of real people saying and doing things they never said or did. One can imagine immediately why this is a cause for concern from a security perspective.

DeepFake technology is still in its infancy and can be easily detected, even by the untrained eye. Glitches in the software, current technical limitations, and the need for a large collection of shots of a person’s likeness from multiple angles in order to create fake facial models make this a difficult space for hackers to master. While not a security threat now, given how easy it is to spot manipulations, the possibility of flawless DeepFakes is on the horizon and, as such, yields insidious implications far worse than any hack or breach.

The power to contort content in such a way yields a huge trust problem across multiple channels with varying types of individuals, communities, and organizations: politicians, media outlets, brands and consumers just to name a few. While the cyber industry focuses on the severity of unauthorized data access as the “problem,” hackers are shifting their attacks to now modify data while leaving it in place rather than holding it hostage or “stealing” it. One study from Sonatype, a provider of DevOps-native tools, predicts that, by 2020, 50% of organizations will have suffered damage caused by fraudulent data and software, while another report by DeepTrace B.V, a company based in Amsterdam building technologies for fake video detection and analysis, states, “Expert opinion generally agrees that Deepfakes are likely to have a high profile, potentially catastrophic impact on key events or individuals in the period 2019-2020.”

What do hackers have to gain from manipulated data?

  • Political motivation – From propaganda by foreign governments to reports from an event being altered before they reach their destination, there are many ways this technology can impact public perception and politics across the globe. In fact, Katja Bego, Senior Researcher at Nesta, says, “2019 will be the year that a malicious ‘deepfake’ video sparks a geopolitical incident. We predict that within the next 12 months, the world will see the release of a highly authentic looking malicious fake video which could cause substantial damage to diplomatic relations between countries.” Bego was right that DeepFake technology would reach the market this year, so we will see how it develops in the near future.

 

  • Individual impacts – It’s frightening to think that someone who understands this technology well enough could make a person appear to do or say almost anything. These kinds of videos, if persuasive enough, have far-reaching impacts on individuals’ relationships, jobs, or even personal finances. If anyone can essentially “be you” through audio or video, the possibilities of what a hacker could do are nearly limitless.

 

  • Business tampering – While fraud and data breaches are by no means a new threat in the business and financial sectors, Deepfakes will provide an unprecedented means of impersonating individuals. This will contribute to fraud in traditionally “secure” contexts, such as video conferencing and phone calls. From a synthesized voice of a CEO requesting fund transfers, to a fake client video requesting sensitive details on a project, these kinds of video and audio clips open a whole new realm of fraud that businesses need to watch out for.

While the ramifications of these kinds of audio and video clips seem disturbing, DeepFake technology can be used for good. New forms of communication are cropping up, like smart speakers that can talk like our favorite artists, or virtual versions of ourselves representing us when we’re out of office. Most recently, the Dalí Museum in Florida leveraged this technology to create a lifelike version of the Spanish artist himself, with whom visitors could interact. These instances show us that DeepFake is a crucial building block in creating humanlike AI characters, advancing robotics, and widening communication channels around the world.

In order to see the benefits and stay safe from the threats, it is no longer going to be enough to ensure your security software is up to date or to create strong passwords. Companies must be able to continuously validate the authenticity of their data, and software developers must look more deeply into the systems and processes that store and exchange data. Humans continue to be the beginning and ending lines of defense in the cyber-scape, and while hackers create DeepFakes, the human element of cyber security reminds us that just as easily as we can use this technology for wrongdoing, we have the power to use it to create wonderful things as well.

Photo by vipul uthaiah on Unsplash

When cyber security meets machine learning

What happens when cyber security and machine learning work together? The results are pretty positive. Many technologies are leveraging machine learning in cyber security functions nowadays in order to automate and augment their cyber workforce. How? Most recently in training and skill building.

Machine learning helps emulate human cognition (e.g., learning based on experiences and patterns rather than explicit programming), so autonomous agents in a cyber security system, for instance, can “teach themselves” how to build models for pattern recognition while engaging with real human cyber professionals.
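As a minimal, illustrative sketch of that idea (the feature names and numbers below are hypothetical, not from any Circadence system), an agent can “learn” a class prototype from labeled past observations and classify new activity by similarity, rather than following hand-coded rules:

```python
# Minimal, illustrative sketch: an agent "teaches itself" a pattern-recognition
# model from labeled experience instead of following hand-coded rules.
# Features are hypothetical: [failed logins per hour, MB transferred].

def centroid(samples):
    """Average each feature across a list of vectors."""
    n = len(samples)
    return [sum(v[i] for v in samples) / n for i in range(len(samples[0]))]

def train(benign, malicious):
    """Learn one prototype per class from past observations."""
    return {"benign": centroid(benign), "malicious": centroid(malicious)}

def classify(model, vector):
    """Label a new observation by its nearest learned prototype."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(model, key=lambda label: dist(model[label], vector))

# Past "experience" the agent learns from
benign_history = [[1, 5], [0, 8], [2, 6]]
malicious_history = [[40, 90], [55, 120], [35, 80]]

model = train(benign_history, malicious_history)
print(classify(model, [50, 100]))  # prints "malicious"
```

A production system would use far richer features and models, but the principle is the same: the detection logic is derived from experience rather than written by hand.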

Machine learning as a training support system

Machine learning becomes particularly valuable in cyber security training when it supports human activities like malware detection, incident response, network analysis, and more. One way machine learning shows up is in our gamified cyber learning platform Project Ares, through our AI advisor “Athena,” who generates responses to players’ queries when they get stuck on an activity or need hints to progress through a problem.

Athena generates a response from its learning corpus, using machine learning to aggregate and correlate all player conversations it has, while integrating knowledge about each player in the platform to recommend the most efficient path to solving a problem. It’s like modeling the “two heads are better than one” saying, but with a lot more “heads” at play.

Machine learning as an autonomous adversary

Likewise, machine learning models provide a general mechanism for organization-tailored obscuring of malicious intent during professional training, enabling adversaries to disguise their network traffic or on-system behavior to look more typical and evade detection. Machine learning’s ability to continually model and adapt enables the technology to persist undetected for longer (if it is acting as an autonomous agent against a trainee in our platform). This challenges the trainee in a good way: they begin to think like an adversary and understand how adversaries respond to defensive behavior.
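A toy example of that cat-and-mouse dynamic (all traffic numbers are invented for illustration): a simple statistical detector learns a baseline of “typical” traffic volume and flags large deviations, so an adversary who shapes activity to match the baseline slips past it:

```python
# Toy example (invented numbers): a detector learns "typical" traffic volume,
# flags large deviations, and is evaded by traffic shaped to match the baseline.
from statistics import mean, stdev

def fit_baseline(volumes):
    """Model normal traffic as (mean, standard deviation)."""
    return mean(volumes), stdev(volumes)

def is_anomalous(baseline, volume, threshold=3.0):
    """Flag traffic more than `threshold` standard deviations from the norm."""
    mu, sigma = baseline
    return abs(volume - mu) > threshold * sigma

# Hypothetical hourly traffic volumes observed on a healthy network
normal_traffic = [100, 110, 95, 105, 98, 102, 107, 99]
baseline = fit_baseline(normal_traffic)

print(is_anomalous(baseline, 900))  # blatant exfiltration spike: True
print(is_anomalous(baseline, 104))  # adversary mimicking normal volume: False
```

An adaptive adversarial agent continually re-learns what “normal” looks like on the target network, which is exactly why defenders benefit from training against one.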

Machine learning supports cyber skills building

Companies like Uber use machine learning to understand the various routes a driver takes to transport people from point A to point B, using the data collected to recommend the most efficient route to a destination.

Now imagine that concept applied to cyber training: machine learning can guide cyber pros through activities while also activating a trainee’s cognitive functions in ways that traditional, off-site courses could not, increasing the learning potential for professionals looking to hone their cyber skills and competencies.
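Framed as an algorithm, “recommend the most efficient path” is a shortest-path problem. Below is a minimal sketch using Dijkstra’s algorithm; the training activities and effort estimates in the graph are hypothetical:

```python
# Illustrative sketch: recommending the "most efficient path" through a set of
# steps framed as shortest-path search. Activity names and costs are hypothetical.
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: return (total cost, path) from start to goal."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, step_cost in graph.get(node, {}).items():
            heapq.heappush(queue, (cost + step_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Estimated effort (hours) to move between training activities
graph = {
    "intro": {"recon": 2, "defense": 5},
    "recon": {"defense": 1, "forensics": 4},
    "defense": {"forensics": 2},
}

print(shortest_path(graph, "intro", "forensics"))
# prints (5, ['intro', 'recon', 'defense', 'forensics'])
```

In practice, a recommender like this would weight edges with learned data about each trainee (past performance, time on task) rather than fixed estimates.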

Machine learning can analyze user behavior for both fraud detection and malicious network activity. It can aggregate and enrich data from multiple sources, act as a virtual assistant with specialized knowledge, and augment cyber operators’ daily tasks. It’s powerful stuff!

To learn more about machine learning and AI in cyber training, download our white paper “Upskilling Cyber Teams with Artificial Intelligence and Gamified Learning.”

Photo by Startup Stock Photos from Pexels