Expanding Capabilities for Policy Development
Generative AI can assist in creating policy drafts, analyzing vast amounts of data to identify trends and patterns, and simulating the potential outcomes of different policies. This can lead to more informed and effective policy decisions.
Enhancing Service Delivery Outcomes
AI-powered tools can help agencies predict service needs, optimize resource allocation, and personalize services for citizens. For example, a city’s transportation department could use generative AI to predict traffic patterns and adjust public transit schedules accordingly.
Improving Internal Workings of Government
Generative AI can streamline internal processes, such as document management, email sorting, and meeting scheduling. This can lead to significant time savings and increased efficiency.
Streamlining Regulation Development, Compliance, and Reporting
AI can assist in drafting regulations, ensuring compliance with existing laws, and generating reports. This can reduce the burden on government employees and improve the accuracy of regulatory processes.
Accelerating Whole-of-Government Strategies and Policies
Generative AI can facilitate collaboration and information sharing across different agencies, leading to more cohesive and effective policy development.
The Cybersecurity Imperative
While generative AI offers significant benefits, it also poses new cybersecurity challenges. Citizen and government data must be protected, because information entered into these platforms can reach a far wider audience than traditional data loss avenues. ChatGPT and other generative AI platforms retain input data, such as chat history, to train and improve their models. This means data entered today could be used in training and potentially exposed later to other users.
The Risks of Generative AI
Generative AI platforms have lowered the barrier for hackers to launch more sophisticated and effective phishing attacks. These tools can produce highly persuasive, grammatically polished text in many languages, making it easier for bad actors to craft convincing scams. Additionally, the use of generative AI can lead to data loss, as sensitive information might be inadvertently shared or leaked.
The Executive Order on AI
On October 30, 2023, the President issued the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The order emphasizes the safe and responsible use of AI, with a key focus on generative AI. It discourages broad general bans or blocks on agency use of generative AI but urges agencies to put appropriate safeguards in place. This approach leaves room for experimentation and for routine tasks that carry a low risk of impacting Americans’ rights.
Securing Generative AI in Agencies
Agencies can securely enable generative AI by adopting a multi-faceted approach that balances innovation and security. Here are some key strategies:
A Layered Approach for Data Loss Prevention
Most organizations will adopt data loss prevention (DLP) policies as guardrails for generative AI. However, DLP alone is not enough due to the varied avenues through which users input data. Agencies must adopt a layered approach with new capabilities that address the specific ways generative AI platforms are utilized.
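To make the layering concrete, the sketch below stacks two illustrative checks on a prompt before it reaches a generative AI platform: a size limit that curbs bulk copy/paste, and pattern matching for known sensitive data formats. The patterns, limit, and function names are hypothetical examples, not any vendor's actual implementation; real DLP combines many more techniques (classifiers, exact-match dictionaries, file inspection).

```python
import re

# Illustrative sensitive-data patterns; a real rule set would be far richer.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

MAX_PROMPT_CHARS = 2000  # hypothetical per-prompt character limit


def inspect_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Run a prompt through stacked checks; return (allowed, reasons)."""
    reasons = []
    # Layer 1: size limit curbs bulk copy/paste of whole documents.
    if len(prompt) > MAX_PROMPT_CHARS:
        reasons.append("exceeds character limit")
    # Layer 2: content inspection for known sensitive formats.
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            reasons.append(f"matched {name} pattern")
    return (not reasons, reasons)
```

Each layer catches input the others miss: a short prompt containing a Social Security number passes the size check but fails content inspection, and vice versa.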
Protection on a Group Level vs. Domain Level
When adopting technology as a safeguard for generative AI, it’s crucial to enable policies at a generative AI group level rather than on a domain-by-domain basis. This approach ensures consistency and reduces the burden on security and IT teams, who must constantly update policies as new generative AI platforms emerge.
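The difference can be illustrated with a minimal policy lookup: instead of writing one rule per domain, the policy is keyed to a category such as "generative-ai," so newly classified sites inherit it automatically. The domain map, category names, and policy fields below are hypothetical examples for illustration only.

```python
# Illustrative domain-to-category map; in practice a security product
# maintains this classification centrally, so new AI sites are covered
# as soon as they are categorized, with no policy change required.
CATEGORY_MAP = {
    "chat.openai.com": "generative-ai",
    "gemini.google.com": "generative-ai",
    "intranet.example.gov": "internal",
}

# One policy entry governs the entire group of generative AI sites.
GROUP_POLICIES = {
    "generative-ai": {"allow": True, "dlp_inspection": True},
}

DEFAULT_POLICY = {"allow": True, "dlp_inspection": False}


def policy_for(domain: str) -> dict:
    """Resolve the effective policy via the domain's category, not the domain itself."""
    category = CATEGORY_MAP.get(domain, "uncategorized")
    return GROUP_POLICIES.get(category, DEFAULT_POLICY)
```

When a new generative AI platform appears, only the categorization changes; the group policy, and the security team's workload, stay the same.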
Protection from Internet-Borne Threats
As agencies use generative AI to improve processes, bad actors are using the same technology to enhance their attacks. Agencies need to adopt technology that protects against internet-borne threats, ensuring that users and data remain secure regardless of the sophistication of bad actors’ tactics.
Menlo Security: The Trusted Partner for Secure Generative AI
Menlo Security is a leading name in browser security, enabling agencies to safely adopt generative AI and protect against both data loss and phishing attacks. Menlo Security offers several features to safeguard generative AI use:
Data Loss Prevention (DLP)
Menlo Security DLP controls the data input into ChatGPT and other generative AI tools, preventing sensitive information from being shared or leaked.
Copy & Paste and Character Limit Controls
These controls restrict the amount of data that can be copied and pasted into generative AI platforms, reducing the risk of data loss.
Browser Forensics
Menlo Security’s browser forensics capabilities provide real-time monitoring and analysis of browser activities, helping to detect and prevent potential security threats.
Conclusion
The future of public sector innovation lies in the responsible and secure adoption of generative AI. While this technology offers significant benefits, it also poses new cybersecurity challenges. By adopting a layered approach to data loss prevention, enabling policies at a group level, and protecting against internet-borne threats, agencies can securely enable generative AI. Partners like Menlo Security can provide the necessary tools and expertise to safeguard this transformative technology.
FAQ
What is generative AI?
Generative AI is a type of artificial intelligence that can create new content, such as text, images, or music, based on the data it has been trained on. This technology is used in various applications, from chatbots to creative tools.
How can generative AI benefit the public sector?
Generative AI can benefit the public sector by improving policy development, enhancing service delivery, streamlining internal processes, and accelerating whole-of-government strategies. It can also help agencies make better and faster decisions, leading to more effective public services.
What are the cybersecurity risks associated with generative AI?
The cybersecurity risks associated with generative AI include data loss, phishing attacks, and the potential exposure of sensitive information. As generative AI platforms save data to train and improve their models, there is a risk that this data could be exposed to other users.
How can agencies securely enable generative AI?
Agencies can securely enable generative AI by adopting a layered approach to data loss prevention, enabling policies at a group level, and protecting against internet-borne threats. Partners like Menlo Security can provide the necessary tools and expertise to safeguard this technology.
What is the Executive Order on AI?
The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, published on October 30th, 2023, emphasizes the need for safe and responsible use of AI. It discourages broad general bans or blocks on agency use of generative AI but urges agencies to put appropriate safeguards in place.