August 21, 2023
Your Most Important Users May Not Be Human at All
Content is not just words and pictures. It’s data. It has to be read and understood by your human constituents, of course. But increasingly, your most important users may not be human at all. Bots consume your content, and soon bots may create it too.
Artificial intelligence is here, and we are only beginning to comprehend how it will change our lives and how we work. AI is ubiquitous in today’s news headlines, and it comes up in practically every professional conversation. Some form of AI will be part of every experience and in every piece of technology we use.
Technological advancement can be a great benefit to the workplace and to society; it all depends on how we choose to use it. ChatGPT, a software application first released by its developer OpenAI in November 2022, wowed users and accelerated the rest of the tech industry’s in-progress AI projects. While concerns about accuracy, algorithmic biases, plagiarism, and even existential threats abound, the AI genie is out of the bottle. But we still have the opportunity to frame how we integrate it into our work and our lives. The train is unstoppable, but the destination is not inevitable. We have a say in how we transition to AI-augmented work; how we harness its capabilities to serve our constituents better; how we hold true to our values; how we protect human creators from exploitation; and how we safeguard our fragile information ecosystem.
Machine Learning 101
First, let’s get up to speed on the terminology.
- Artificial Intelligence – The broad term for machines or systems that think and act like humans. The idea of artificial intelligence has captured the human imagination since antiquity and features prominently in art and writing throughout history. From ancient Greek mythology to Frankenstein’s monster to HAL 9000, we have told stories of automaton-like “beings” that challenge us to confront what it means to be human or provide lessons in hubris.
- Machine Learning – A subset of AI in which machines are taught to learn – to get better over time – by continually refining their calculations as more data becomes available.
- Large Language Models – Artificial neural networks that mimic the way human brains process information. These systems take in vast amounts of text and other data, capturing the syntax and semantics of human language by recognizing patterns and building statistical models.
- Generative Pre-trained Transformer – The GPT in ChatGPT is a specific large language model that can generate what we perceive as original content, based on how it’s been pre-trained with phenomenally large datasets, along with some human fine-tuning. It doesn’t just correlate data points to produce text: GPT technology can also produce images that look remarkably like real photographs. OpenAI’s image generator is called DALL-E. (A toy sketch of the next-word prediction at the heart of these models follows below.)
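To make those “statistical calculations” concrete, here is a minimal, purely illustrative sketch of next-word prediction using bigram counts. Real large language models use neural networks with billions of parameters, but the core idea – predict the next word from patterns observed in the training data – is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale datasets real models train on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most likely next word and its probability."""
    counts = following[word]
    total = sum(counts.values())
    best, count = counts.most_common(1)[0]
    return best, count / total

word, prob = predict_next("the")
print(f"After 'the', the model predicts '{word}' (p = {prob:.2f})")
# After 'the', the model predicts 'cat' (p = 0.50)
```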
On many benchmark tasks, AI systems now perform as well as or better than humans: handwriting recognition, speech recognition, image recognition, reading comprehension, and language understanding are but a few. And these models keep improving rapidly as they take in more data.
Within 2 months of its launch, ChatGPT hit 100 million users – an astounding rate of adoption. The biggest tech companies, such as Google, Microsoft, and Meta, scrambled to roll out their own GPT-enhanced products as quickly as possible to avoid being left behind. There’s some fear, even among tech leaders, that a careless rush for market dominance could produce some kind of runaway AI that poses existential risks for humans on par with a nuclear incident. Federal and state governments have been slow to respond with legislation, but committees in the House and Senate held hearings this summer to discuss AI’s impact on human rights, intellectual property, military readiness, scientific advancement, homeland security, and other issues.
What To Watch Out For
- Misinformation: Some misinformation is intentional, but often it’s inadvertent. Much of what’s on the internet is meant as parody, and we’ve seen how gullible humans can be when a joke is presented as fact without a label. Computers have an even less sophisticated sense of humor.
- Hallucinations: When ChatGPT generates words and phrases, it’s doing so based on statistical correlations from all the words and phrases it’s been trained on. Sometimes the string of words it produces seems confidently correct but is, in fact, horribly wrong. You can’t trust everything you read that’s written by AI.
- Deepfakes: AI-generated images and audio that look and sound nearly indistinguishable from the real thing. You can’t trust your eyes and ears, either.
- Cybersecurity: AI-generated code opens up new avenues for hackers to breach even advanced security software. Many in the AI community have raised alarms about how fast these systems are learning – with no guardrails.
- Bias: All of the data we’ve fed AI systems comes from humans. And we know humans have pervasive – even if unintentional – biases around race, sexuality, gender, and culture.
- Intellectual property: Congressional hearings in June put a spotlight on intellectual property concerns. There are ethical and legal questions about sampling someone else’s work to create a wholly new piece that then supplants it.
Content Governance
There are reasons to be concerned, even while being excited about the incredible opportunities AI provides. It’s becoming clear that AI-augmented work will become the norm in many industries, including tech, media, legal, marketing, education, finance, graphic design, accounting, and customer service.
Instead of staring at a blank page, we can use ChatGPT to write a rough draft that can help get us going. For those of us analyzing data to make important decisions, AI can interpret the data in seconds. As it becomes more commonplace in our daily work, how can we ensure our use is ethical, responsible, and successful? Here are 7 ways, as first presented during GovTalks in May:
1. Be Transparent
We should never mislead our constituents by presenting AI-generated images or content without disclosing their origin. In addition, while it is acceptable to use AI as a tool when the end product is mostly created or verified by us, it is unethical to pass off AI’s work as our own.
We must also be accountable for our actions and take ownership of the information we produce, products we create, and policy decisions we make.
AI can assist us in countless ways; however, there is a risk of relying on it too heavily as a substitute for quality human labor instead of using it as an enhancement. Ultimately, we are accountable to the public and cannot shift blame to robots if something goes wrong.
Some jurisdictions have released guidelines advising government employees that searches and prompts used to generate content in a tool like ChatGPT may be subject to open records laws.
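As one illustration of that record-keeping point – using a hypothetical log format, not any particular jurisdiction’s requirement – an agency could timestamp and retain every prompt before it goes to a generative tool:

```python
import json
from datetime import datetime, timezone

# Hypothetical audit log; real retention requirements vary by jurisdiction.
AUDIT_LOG = "ai_prompt_log.jsonl"

def log_prompt(user_id: str, tool: str, prompt: str) -> None:
    """Append a timestamped record of an AI prompt, in case it is ever
    requested under open records laws."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "tool": tool,
        "prompt": prompt,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_prompt("jdoe", "ChatGPT", "Draft a press release about the new park hours.")
```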
2. Don’t Pollute the Waters
“Hallucinations” – when ChatGPT returns information that seems factual but isn’t – are a commonly reported problem. In some cases, that’s because the material used to train the bot was incorrect. More than ever, it is important for government website managers to make sure our content is updated with the most accurate and timely information, scrubbed of clutter, and written in plain language. Our websites feed the content ecosystem.
Your website should be the single source of truth for your organization. Do not maintain separate knowledge bases that must be kept in sync; that invites incorrect or inconsistent information floating around.
3. Think of the Bots!
Your users are no longer just human. Bots are now a crucial part of web consumption, and people may encounter your information in many contexts beyond your website. Don’t let your content get lost in translation: make sure the bots know what they’re looking at. Fill out metadata fields accurately and completely – the quality of a chatbot’s output is directly tied to the information you provide it.
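Here is what “filling out metadata” can look like in practice – a sketch that describes a page in schema.org vocabulary as JSON-LD, one common way to tell bots exactly what they’re looking at. The page details and URL below are hypothetical.

```python
import json

# Hypothetical page details; schema.org's "GovernmentService" type is one
# common vocabulary that crawlers and search engines understand.
page_metadata = {
    "@context": "https://schema.org",
    "@type": "GovernmentService",
    "name": "Renew a Driver's License",
    "description": "How to renew your driver's license online, by mail, or in person.",
    "provider": {
        "@type": "GovernmentOrganization",
        "name": "Department of Driver Services",
    },
    "url": "https://example.gov/renew-license",
}

# Embed this in the page so crawlers and chatbots can parse it reliably.
snippet = f'<script type="application/ld+json">\n{json.dumps(page_metadata, indent=2)}\n</script>'
print(snippet)
```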
4. Get a .Gov Domain
Another step we can take is getting a .gov domain. It provides an extra layer of authority for SEO and increases the weight given to your content by various AI tools. A .gov domain lends your information credibility with both the people who view it online and the algorithms used by search engines and other AI-driven systems.
5. Establish Audits and Checks
Routine audits and checks are also essential for any organization using artificial intelligence tools in its workplace. Content managers should measure how successful these tools are at saving time and whether the output is accurate. Compare apples-to-apples data and chart progress over time. Audits also help identify any areas of concern that need more attention.
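As one way to track that apples-to-apples data – with made-up sample numbers – an audit could record, for each review period, how many AI-assisted drafts were spot-checked and how many contained errors:

```python
# Hypothetical audit results: (period, drafts reviewed, drafts with errors).
audits = [
    ("2023-Q2", 40, 10),
    ("2023-Q3", 55, 7),
]

for period, reviewed, with_errors in audits:
    accuracy = 1 - with_errors / reviewed
    print(f"{period}: {reviewed} drafts reviewed, {accuracy:.0%} error-free")
# 2023-Q2: 40 drafts reviewed, 75% error-free
# 2023-Q3: 55 drafts reviewed, 87% error-free
```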
6. Create Policies and Guidelines
You should also understand the legal implications of using generative AI and have a clear policy in place outlining how it can be used appropriately. Be aware of any potential bias in the AI algorithms and develop strategies for mitigating it. Additionally, have security measures in place to protect sensitive data from unauthorized access.
Georgia has groups working to establish a set of guidelines that will be released in the coming months. Municipalities that have established their own policies include Boston, Seattle, and San Jose.
7. Support Public Oversight
Many in the tech industry and in government advocate some form of federal oversight to level the playing field for businesses and protect the rights of citizens. Europe is ahead of us, advancing the world’s first comprehensive Artificial Intelligence Act. The White House has identified 5 principles that should “guide the design, use, and deployment of automated systems to protect the American public.” They focus on safety and efficacy, data privacy, protections against algorithmic discrimination, clear explanations, and human fallback mechanisms.