Smart City Panel Discussion at the Global Cybersecurity Forum in Riyadh
In February, I had the pleasure of participating in a smart cities panel at the Global Cybersecurity Forum in Riyadh. It’s a fitting place for such a discussion, since Saudi Arabia is home to the most ambitious smart city project ever undertaken. Announced in 2017, with construction slated to begin this year, Neom represents an estimated $500 billion experiment in building the city of the future.
The panel featured expert perspectives from the private sector, academia, and the Neom project itself. The core of the discussion revolved around security and privacy, and the role each player in the ecosystem (government, academia, the private sector, policy makers, and so on) needs to play to ensure smart urban services are built securely and respect privacy by design.
We all generally agreed that data and privacy will play even more fundamental roles in the cities of the future than they do now. Building the critical infrastructure necessary to deliver reimagined smart services on such a massive scale can’t work without understanding data and the privacy issues inherently linked to it: how it’s collected, shared and used by those that gather it. We need to explore how to build data systems where data never has to leave the individual, and processing occurs where the data is generated and resides. In that realm, many complex issues remain unsolved and need attention from the academic community.
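One well-known building block for keeping raw data with the individual is randomized response: each device perturbs its own answer locally, so the collector only ever sees noisy reports, yet population-level statistics remain recoverable. The sketch below is purely illustrative (the function names and the 75% truth probability are my own choices, not anything discussed on the panel):

```python
import random

def randomized_response(true_answer: bool, p_truth: float = 0.75) -> bool:
    """Report the true answer with probability p_truth; otherwise report a
    fair coin flip. The collector never learns any individual's answer
    with certainty, giving each person plausible deniability."""
    if random.random() < p_truth:
        return true_answer
    return random.random() < 0.5

def estimate_true_rate(reports, p_truth: float = 0.75) -> float:
    """Invert the randomization to recover the population rate:
    P(report = True) = p_truth * rate + (1 - p_truth) * 0.5"""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth
```

With enough participants, the aggregate estimate converges on the true rate even though no individual report can be trusted on its own.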
We have to assume that any data gathered by smart city services has the potential to be compromised or abused in some form. There are interesting problems to be solved around data. Determining data ownership is one, especially when decisions are being made about how best to gather user consent for collection, or how to monetize that data. For example, when you walk through a park, do you consent to be on camera? What would a system look like that allows an individual to opt in?
Beyond data and its governance, there are challenges around cyberattacks. Smart cities are a fertile breeding ground for new types of attacks: new services, critical systems and large attack surfaces will create many new attack vectors, vulnerabilities and forms of exploitation. In the traditional security world, we’d heavily monitor the smart city and learn which vulnerabilities attackers focus their efforts on. The problem with this approach, as with much of the cybersecurity industry, is that it isn’t working. We will always lag behind the attackers, leaving our systems vulnerable until we catch up with the adversaries and understand their latest methods for targeting smart cities.
A shift in thinking is required here. Instead of a reactive approach, it’s time to move left of breach into a proactive, behavior-centric approach. We need to understand the rhythm of people in the cities and the movement of data to understand ‘normal’ behavior, so we can flag malicious outliers in real time. Establishing this baseline is a huge focus for my team; at its core, it’s understanding complex human factors and their impact on cyber risk.
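As a miniature illustration of what a behavioral baseline might look like (the class name, window size and threshold below are hypothetical choices, not a description of any production system), a rolling statistical baseline can flag observations that deviate sharply from recent history:

```python
from collections import deque
import statistics

class BehaviorBaseline:
    """Maintain a rolling baseline of a metric (e.g., hourly foot traffic
    or data-flow volume) and flag observations that deviate sharply from it."""

    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # most recent observations
        self.threshold = threshold           # z-score cutoff for "abnormal"

    def observe(self, value: float) -> bool:
        """Record a new observation; return True if it is an outlier
        against the baseline built from prior observations."""
        is_outlier = False
        if len(self.history) >= 10:  # require some history before judging
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9  # avoid div by zero
            is_outlier = abs(value - mean) / stdev > self.threshold
        self.history.append(value)
        return is_outlier
```

Real behavioral analytics is far richer than a single z-score, of course; the point is the shape of the approach: learn normal first, then judge new activity against it.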
Later in the panel, we talked about the need for a paradigm shift to deal with a coming wave of new technologies that will greatly expand attack surfaces and demand new defense solutions. 5G and IoT are two examples of technology that will greatly impact smart cities. And we can count on more disruptive technology in the future—that’s something that will likely never change. The one constant is the behavior of entities, both users and their devices. When we truly understand that, we’ll be much better equipped to deal with an ever-changing technology landscape.
The panel also spent a fair amount of time on the topic of artificial intelligence (AI) and the use of automated decision making. How much trust do we place in automated decisions? How do we verify that an algorithmic approach will continue to function as expected during times of distress? In my view, algorithms can be dangerous. We need to focus on creating verifiable, explainable systems. Think about critical systems in the food supply chain or in building controls. We want fresh air and building temperatures to stay within levels that are comfortable for humans. If a supervised machine-learning model were used to control the building, would you trust that black box? How can we establish a level of trust? One potential approach: algorithms could be centrally audited, and anything put into production in a smart city could be subject to certification. How we measure the efficacy and inner workings of algorithms is another huge topic that will require years of research.
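One simple, verifiable pattern in this spirit is a safety envelope: rather than trusting the black box outright, wrap its output in a small, auditable guard that enforces hard limits a human can read and certify. The temperature bounds and fallback value below are hypothetical examples, not standards:

```python
def safe_setpoint(model_output, lo: float = 19.0, hi: float = 26.0,
                  fallback: float = 22.0) -> float:
    """Wrap a black-box controller's temperature recommendation (deg C) in a
    verifiable safety envelope: reject non-numeric or NaN outputs, and clamp
    everything else to human-comfortable bounds."""
    if not isinstance(model_output, (int, float)) or model_output != model_output:
        return fallback  # NaN or wrong type: fall back to a safe default
    return min(max(float(model_output), lo), hi)
```

However opaque the model inside, the envelope itself is a few lines of logic that an auditor can verify exhaustively, which is exactly the property the black box lacks.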
Massive monetary investments alone won’t guarantee success in building smart cities of the future. While advancements in technology enable more possibilities, they also introduce new levels of risk. Success will require collaboration across all levels of expertise. Many complex issues remain to be solved. However challenging, it’s an incredibly interesting and important topic to advance in the field of cybersecurity.