For decades upon decades, humans have studied, romanticized, and maybe even hovered over the panic button when it comes to the concept of artificial intelligence, or AI, as a nebulous but certainly possible piece of our future. A best-case scenario, perhaps: AI helps us with our lives by organizing and applying knowledge, identifying informative patterns, automating tasks and maybe even inventing new technologies to benefit us further. A worst-case scenario, perhaps: AI backfires on us and drives mankind into a dystopia where tech fraudsters run wild before “conscious” computers seize all control, kind of like the Terminator movies or a Harlan Ellison story.
If we’re just talking possibilities, either scenario could come to pass. What’s important for government officials and private-sector leaders at various levels right now is learning how relevant this technology has become and what its capabilities are, because they’ve begun to bloom. That’s especially true with “generative AI” programs, like ChatGPT, Midjourney and other tools that can take user prompts and generate entirely new content—text narratives, images, videos, court filings, scientific analyses, all kinds of things—all on their own. This exists today, and it seems to be advancing by the moment.
“What’s new about generative AI, and that’s really what’s been exploding in the past couple of years … is that this is a really different form of artificial intelligence,” said Kristi Nickodem, an AI expert and distinguished professor with the UNC School of Government, while presenting on the subject to a conference of municipal attorneys recently. Generative AI has been popular with some attorneys lately because it has helped with tasks including case research and the wording of sentences in filings. According to Nickodem, the technology itself has already passed the bar exam and scored highly or perfectly on several common standardized tests that humans take.
“It’s not just trained to perform a discrete task,” Nickodem continued of generative AI. “It’s trained on massive volumes of data—whether that data is written text or images—and then trained to replicate or reproduce patterns in that data in a way that resembles human creative content. Sometimes that content is going to be text, sometimes it will be images, sometimes it will be video. But what’s new here is it’s generating brand new content.”
For anyone needing an intro, we can turn directly to this technology, like the aforementioned ChatGPT chatbot, to which Southern City recently posed the question: What are the pros and cons of using AI in municipal government? Here’s what ChatGPT came back with:
Pros
- Efficiency: AI can streamline processes and automate tasks, leading to increased efficiency in local government operations.
- Cost savings: By reducing the need for manual labor, AI can lead to cost savings for the government.
- Improved decision-making: AI can analyze data and provide insights that can help government officials make better decisions.
- Enhanced citizen services: AI can be used to improve citizen services by providing quick and accurate responses to inquiries and requests.
- Increased transparency: AI can help governments make data-driven decisions and increase transparency in decision-making processes.
Cons
- Job displacement: The implementation of AI in local government may lead to job loss for some workers who perform tasks that can be automated.
- Privacy concerns: AI systems may collect and analyze large amounts of data, raising concerns about privacy and data security.
- Bias and discrimination: AI algorithms can inherit biases present in the data they are trained on, leading to discriminatory outcomes.
- Lack of accountability: AI systems may lack transparency in their decision-making processes, making it difficult to hold them accountable for errors or biases.
- Resistance to change: Some employees and citizens may be resistant to the adoption of AI in local government, leading to challenges in implementation.
That content took ChatGPT seconds to generate, even as aspects of it seem born of the subjective care of a human thought process.
Given the ramifications—and cases where generative AI got it wrong; Nickodem got a rise out of the attorneys’ conference by pointing out times that AI-assisted court filings included fictional or otherwise bad content—municipalities have already done what might be commonplace before long: generate in-house policies on how that city hall or town hall should interact with AI.
In March, the magazine Government Technology examined where some local governments in North Carolina are with that effort. The Town of Chapel Hill, for one, has used generative AI to help rewrite documents and policies so they’re easier for the public to understand. The magazine also quoted City of Raleigh Chief Information Officer Mark Wittenburg as saying that it’s “important for us, especially as IT leaders, to really explore what the technology can do. And then be very mindful, again, about the community, the impacts to the community, and positive and negative impacts that it can potentially have.”
Nationally speaking, top tech cities, including Seattle, Washington, home of Amazon and close neighbor to Microsoft, are projecting excitement.
“Innovation is in Seattle’s DNA, and I see immense opportunity for our region to be an AI powerhouse thanks to our world-leading technology companies and research universities,” said Seattle Mayor Bruce Harrell in a November 2023 press release about the issuance of policy for how city employees can use generative AI. “Now is the time to ensure this new tool is used for good, creating new opportunities and efficiencies rather than reinforcing existing biases or inequities.”
Seattle said its policy took six months of human work to shape. It lays out the factors of responsible AI use in municipal government, including having a human employee review all AI-generated content before going live with it and limiting the use of personal data as source material for the technology.
“As a city, we have a responsibility to both embrace new technology that can improve our service while keeping a close eye on what matters—our communities and their data and privacy,” Harrell said.
Other jurisdictions have taken a different approach; the state of Maine, for one, in mid-2023 laid down a temporary but full ban on executive branch employees using generative AI in their work. The directive, from the state’s Office of the Chief Information Officer, points out that although there may be benefits with generative AI, “the expansive nature of this technology introduces a wide array of security, privacy, algorithmic bias and trustworthiness risks into an already complex IT landscape. These systems lack transparency in their design, raising significant data privacy and security concerns. Their use often involves the intentional or inadvertent collection and/or dissemination of business or personal data. In addition, generative AI technologies are known to have concerning and exploitable security weaknesses, including the ability to generate credible-seeming misinformation, disseminate malware and execute sophisticated phishing techniques.”
Generative AI has made headlines recently for being able to clone people’s voices—taking a recording of someone, analyzing it sonically and generating entirely new sentences with a digitally concocted version of that voice—a pure, easy-to-fall-for fake created by a computer. The Federal Trade Commission is now working to crack down on people who use AI to impersonate government agencies or representatives.
The potential seems all over the place and, clearly, we haven’t reached a point of convention or uniformity in dealing with it.
That’s as the concept itself continues to change and evolve—and is far from new. It’s been a favorite subject of both science and pop culture for roughly 75 years or more. Early notions came from English computer scientist Alan Turing in his 1950 paper, “Computing Machinery and Intelligence.” The term “artificial intelligence” was coined in 1956, when computer scientist John McCarthy brought a group of top minds together to discuss the subject, deem it achievable and decide it was worth working toward. As computers developed in their abilities to process information, elements of that goal came into real view and got us to where we are today.
While the sci-fi writers might still see years of creative fuel in the subjects of a potential dystopia with AI gone wrong, or the ideal utopia of problem-solving technologies, government leaders are acting now to prevent runaway problems and harness the best opportunities. Email listservs are beginning to circulate draft policies. Information technology directors are eyeing the curve.
The national Council of State Governments, based in Kentucky, is one group compiling state-level actions on generative AI as legislatures begin to churn out bills on the subject.
“Be sure that you are staying educated on these developments,” said Nickodem. “The genie is kind of out of the bottle.”
For more with additional expert analysis from Nickodem, check out the related episode of Municipal Equation, the League’s monthly podcast, at https://municipalequation.libsyn.com.