There is always a lot of fear during times of great change, and now is a time of unprecedented change. There are concerns that technological advances threaten the existence of humanity itself, or will at least adversely affect the quality of life in the future.
These concerns are nothing new. Let us go back to the year 1920, when the term "robot" was coined. Do you know the plot of the first story about robots?
The Czech play "Rossum's Universal Robots" — which introduced the word "robot" — is a tale of robots who rise up against humans and cause our extinction. Since then, this theme has been played out over and over again in science fiction — "The Terminator" series, the film "I, Robot", the book "Robopocalypse", and many more.
Other science fiction stories tell of worlds where everyone is under constant surveillance ("1984", "We") or where police arrest people before they commit a crime ("Minority Report").
Here we are in 2023 and we are actually starting to see technological advances that make these kinds of dystopian futures seem possible:
Robots in the workplace
Self-driving cars
Mind-reading technology
Deep fakes
Omnipresent surveillance
Artificial intelligence
These advances are neither good nor evil. As with any technology, the issue is how humans choose to use it. I recently attended the IAPP "Privacy, Security, and Risk" conference, where there was a lot of buzz about the latest developments in tech, along with some excellent points about how we can regulate such technologies. Here I share some highlights from the conference.
Trust and Safety Allow Innovation to Move Faster
Trevor Hughes, IAPP President, started off the conference with a great keynote. He spoke about the early days of the automobile. There were many concerns about the safety of the automobile — so much so that the UK passed a "red flag" law requiring that every such self-propelled vehicle ("locomotive") be preceded by a person on foot carrying a red flag to warn of its approach. There were even early "memes" about the "automobile fiend," who is deadly and cares only about speed.
What was it that allowed cars to go fast (and without flaggers leading the way)? The invention of brakes. With adequate brakes, a vehicle could proceed quickly because it could stop safely.
Brakes allow cars to go fast!
Just as there were concerns about the first cars, there are concerns about the risks of AI, neurotechnology, and other advances. This does not mean we can't manage the risks, or that we are doomed to a dystopian future. Now is the time for us all to work together on the "brakes" — or safeguards — that will allow us to safely use these technologies.
Mind Reading
Nita Farahany, author of "The Battle for Your Brain", spoke about neurotechnology that is being developed now and has already come a long way. Sensors placed on your head can detect brain patterns. This has many positive applications, including helping people with a variety of medical problems.
She shared an example of a hat that employees could wear to trigger an alert whenever their alertness wanes. This has the potential to dramatically reduce fatigue-related errors for pilots, heavy-equipment operators, and others who must remain sharp on the job.
But what if these sensors are able to read brain activity in great detail? It has recently been demonstrated that by pairing an AI-based decoder with MRI scan data it is possible to piece together what someone is thinking! What if that hat that ensured you were alert also saved all of this raw brain wave data? That data could potentially be used by management to determine what you are thinking — are you satisfied with your pay? Are you considering looking for a new job? Or, if the data were stolen, hackers could blackmail you: "We will release your most private thoughts unless you pay up!"
A good design that respects privacy would ensure that only the conclusions (such as alertness level) are stored, and that all of the raw brain data is securely wiped from the device.
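This data-minimization principle can be sketched in a few lines of code. The following is a hypothetical illustration only — the function names, the signal format, and the simple averaging threshold are all my own assumptions, not any vendor's actual device logic:

```python
# Hypothetical sketch of data minimization for a wearable alertness monitor:
# only the derived conclusion is retained; the raw signal buffer is
# overwritten in place immediately after use.

def alertness_level(raw_samples):
    """Reduce raw sensor readings (assumed 0.0-1.0) to a coarse conclusion."""
    avg = sum(raw_samples) / len(raw_samples)
    return "alert" if avg >= 0.5 else "drowsy"   # threshold is illustrative

def process_and_wipe(raw_samples):
    """Derive the conclusion, then destroy the raw data in place."""
    conclusion = alertness_level(raw_samples)
    for i in range(len(raw_samples)):
        raw_samples[i] = 0.0   # overwrite, don't just drop the reference
    return conclusion

buffer = [0.7, 0.6, 0.8, 0.9]
status = process_and_wipe(buffer)
print(status)   # the conclusion survives
print(buffer)   # the raw readings do not
```

The design choice here is that the raw data never needs to leave the device at all: management (or a thief) can only ever see the coarse label, never the underlying brain signal.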
Without appropriate respect for the privacy of our thoughts, here come the Thought Police of 1984, and a host of other scary possible futures.
Does that mean such technology should simply be banned? No. These technologies also offer a great many benefits. We simply need laws, regulations, and clear guidance on how to develop them while still respecting privacy and other fundamental human rights.
Appropriate regulations and oversight can work together to minimize risks, but the time to start thinking about this is NOW, while these technologies are in their infancy. Consider how much privacy was lost — and has yet to be regained — through the meteoric rise of social media, search engines, and targeted advertising platforms.
Facial Recognition and Privacy
Kashmir Hill, author of "Your Face Belongs to Us," spoke about an interesting use of facial recognition. The owner of Madison Square Garden was upset by a lawsuit brought against him related to an injury at an event hosted in the Garden. He decided not to allow lawyers working for the firm that was suing him into any events at the Garden. He was able to make this a reality because the venue was already using facial recognition at all events to keep troublemakers out. He simply had all of the photos from the law firm's website added to the database.
This is legal, but it highlights how far facial recognition has come. Coupled with the vast number of photos available online, there is the potential for a lot of abuse. While government and law enforcement face many restrictions (and oversight) on the use of such data, there is little protecting us from the corporations we do business with. We either put up with it, or we take our business elsewhere.
Deep Fakes
Peter Warmka of the Counterintelligence Institute gave a great talk about human hacking.
A common scam is for fraudsters to call a grandparent and pretend to be one of their grandchildren in trouble. The phone call is frequently spoofed so that it appears to be coming from a prison or police station. The caller asks for money to get bailed out but cautions, "please don't tell Mom and Dad." All too frequently, grandparents fall for this scam.
In recent years this scam has become even more convincing because of the ease with which deep fakes can be created. A scammer needs only a short audio sample of a loved one speaking to clone that person's voice!
Deep Fakes Make for Effective Social Engineering
Now imagine what could be done in your workplace. The criminal may start by looking for videos of company executives online to gain video and audio that can be used to create a deep fake.
They choose a few targets, research the companies through publicly available sources, and scrape social media to learn the scuttlebutt. They use LinkedIn to find the names of your finance and accounting staff, and then look up their cell phone numbers on a site like TruePeopleSearch.
Next, the criminal creates a plausible script based on all of this information. Perhaps your company recently moved into a new office — then it would be plausible for the CEO to ask finance to pay an office furniture company.
Finally, the criminal calls someone in accounting from a spoofed phone number and — speaking with your CEO's voice — requests a wire transfer to the office furniture vendor. "I will forward you the invoice with payment information. They are a bit upset about the delay and contacted me directly, so please send payment immediately." Wow — there are few people who would not fall for this.
How are you protecting your organization from deep fakes?
Here are some initial suggestions:
Raise awareness — Train staff about common scams and how deep fakes can make them far more convincing.
Call back — Always require that staff call the person back at the phone number in the corporate directory before making any wire transfers or revealing confidential information.
Manage social media — Create a social media policy and educate your staff about what should and shouldn't be posted online. Highlight ways they can protect themselves; it's not just about your organization's concerns.
Privacy assistance — Consider helping your staff maintain their privacy online; many people want to be removed from these public databases but don't know how. This protects both your employees and your organization.
Got Privacy?
A lot more was discussed at the conference, of course — the latest state privacy laws, ethics, incident management, cyber resiliency, digital advertising and privacy law, and much more. Check out the "US State Privacy Legislation Tracker" on the IAPP website to see how US states are taking steps right now to enact legislation to protect privacy.
How are you and your organization keeping up with the plethora of new privacy laws?
Need the assistance of a privacy, security, and compliance expert? Contact me via email (Justin@ArmstrongRisk.com) for a free virtual meeting.
References
Books by authors who presented at the conference:
"The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology" by Nita Farahany
"Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy as We Know It" by Kashmir Hill
"The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future" by Orly Lobel
Disclaimer: The information provided here (“material”) is intended for informational purposes only and does not constitute legal or professional advice. This material is not warranted to be exhaustive or complete. Additionally, every organization has a unique set of circumstances, business requirements, contractual obligations, and regulatory compliance requirements which we are unaware of. No guarantee is made that use of this material will secure your organization and help you to meet your compliance obligations.