Using a GPT model to complete tasks can feel like "cheating" in the traditional sense of the word. [Merriam-Webster](https://www.merriam-webster.com/dictionary/cheating) defines cheating as:
1: to deprive of something valuable by the use of deceit or fraud
2: to influence or lead by deceit, trick, or artifice
3: to elude or thwart by or as if by outwitting
Let's look at that first definition up-close.
"To deprive of something valuable" describes a transaction - or the absence of one - in which some value was supposed to be delivered, and it wasn't. When we think of the value of written documents, we often treat the effort and time spent writing as the object of value (1). But there's another object of value just under the surface, one we might not see as quickly: the thinking behind the effort.
To recognize the thinking behind the effort, we have to step back and examine some properties of the object we're creating. We need to identify what result it's meant to drive toward, and in what context that result will be used. We also need to know what an unacceptable result looks like so that we can prevent it. Stepping back like this makes the under-the-surface thinking easier to recognize.
What do you want to use this output for? Some commonly cited use cases include college admissions essays, website copy, and marketing content - with the goals of demonstrating a prospective student's fitness for the university to which they're applying, helping users navigate a website and get appropriate information, and educating potential customers about a business's offerings. Your goals are likely to be different from any of these. Before prompting the GPT model, identify what you want it to do. This is a key touchpoint for human contribution in using AI. If you were meeting with a person to talk about this topic, what would you want to walk away with? It's okay to take charge here and set the agenda. In fact, it adds value when you do.
I recommend starting with a blank page, by yourself, to brainstorm. Handwritten is even better. Think of the "seed" of your idea: what is the smallest piece of insight that you want to expand upon? What is it that's piqued your attention to start this process? Write that down. Write down 2-3 "seeds" before opening the GPT, adjusting your verbiage each time. Notice the things you DO want and the things you DO NOT want from the AI's output.
The GPT model doesn't know your context. Even if it did, it wouldn't understand it as well as you do. Because it has access to so much information, it needs help filtering what's relevant. By providing constraints in your prompt, you help the GPT focus its output. Without these constraints, you're more likely to get generic output that doesn't add value. So it becomes important to set constraints that keep the output usable for its intended purpose. This is another key touchpoint where the value added relies on your thinking.
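One way to make this concrete is to wrap your "seed" idea in explicit constraints before sending it to the model. The sketch below is a minimal illustration of that habit; the `build_prompt` helper and the audience, tone, and length fields are hypothetical examples, not any particular tool's API - substitute whatever context matters for your own task.

```python
def build_prompt(seed, audience, tone, max_words, avoid):
    """Wrap a 'seed' idea with explicit constraints so the model
    has context to filter what's relevant."""
    constraints = [
        f"Audience: {audience}",
        f"Tone: {tone}",
        f"Length: at most {max_words} words",
        f"Avoid: {', '.join(avoid)}",
    ]
    return "\n".join([f"Expand on this idea: {seed}", *constraints])

# Hypothetical example values - replace with your own context.
prompt = build_prompt(
    seed="Our onboarding docs assume too much prior knowledge",
    audience="new engineers in their first week",
    tone="direct and practical",
    max_words=300,
    avoid=["marketing language", "unexplained acronyms"],
)
print(prompt)
```

The constraints themselves are where your thinking shows up: deciding the audience, the tone, and what to avoid is work the model can't do for you.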
At least for now, AI still hallucinates things that are not true. If it's important that the output is factually correct, go through it slowly, verify each claim, and correct anything that's wrong. As of this writing, it's generally faster to make these edits yourself than to try to teach the GPT to do it.
Finally, the most important key touchpoint: owning your decisions. The AI can't make decisions to act for you any more than your calculator (remember those?!) could. The AI is helpful when brainstorming, and can help you refine ideas, but it can't do your work for you. Ultimately, this responsibility is why there's still so much value in work done with AI, and how you're still delivering value (and therefore, not cheating). Let it write the documentation; you're still doing the hard part of the work.
The nature of that work isn't what we're used to. The value you deliver isn't necessarily the effort spent on writing; the value is in the results that writing supports. A GPT model is just one tool among others that can make that results-delivery process faster and easier. And, like other tools, it relies on your use of it to generate helpful outputs. If you'd like support identifying this work and integrating GPT tools into your process, let's schedule a call.
Professor Ethan Mollick wrote here (June 2023) about how this thinking is going to have to shift.