AI at the workplace isn’t going anywhere… so embrace it and become adept at it
ChatGPT and the other AI chat models are here to stay as part of our work culture, so instead of fighting them, bell the cat and make it dance to your tune.
For the uninitiated, prompt engineering refers to the art of crafting effective prompts to get the desired responses from a chat model. This is a crucial skill because Large Language Models are trained on a variety of data sets, both real and, increasingly, synthetic, and need clear, logical prompts to give you the desired information.
This is a skill that can be learned; it requires little or no prior knowledge, and you will improve the more you practise. Prompt engineering will also help you extract the most out of ChatGPT, regardless of whether you plan to use it for school or office work. But before we learn how to game the system, we need to know a little about how it all works.
What is an LLM, and how it works
A Large Language Model (LLM) is a type of AI designed to understand and generate human-like text based on the patterns it has learned from massive amounts of language data. It works by breaking down text into tokens (small units of language like words or subwords) and analysing the relationships between them. When you give it an input, the model predicts the most likely next words or sentences by using patterns and statistical relationships it learned during training. Essentially, it “completes” your input while mimicking natural language, allowing it to respond to questions, write text, translate languages, etc. This is also the reason that you should not blindly trust anything that comes out of an LLM.
It is, in simplistic terms, merely predicting the next likely word, much like your phone’s predictive text does. So how do we get the right information out of an LLM? With a good prompt, of course.
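To make the “predicting the next likely word” idea concrete, here is a deliberately toy sketch: a bigram model that picks the word most often seen after the current one in a tiny sample text. A real LLM uses vastly larger data and a neural network rather than raw counts, but the prediction-from-patterns idea is the same. The sample corpus here is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy illustration only: "predict the next word" from simple counts.
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" — it follows "the" most often above
```

Chaining such predictions produces fluent-looking text with no guarantee of truth, which is exactly why an LLM’s output always needs checking.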
What makes a good prompt?
Four Cs make a good prompt: Content, Clarity, Context, and Constraints.
Content: Go straight to the information you need. Use direct language and include key details. If you want the answer in a particular format, say so. For example, “Create a table with the output”, or provide some samples and ask the LLM to mimic their style in its response. You can also invoke personas to improve your output, but more on that later.
Clarity: It is very important to know what you are looking for and be able to explain clearly to the LLM what you want. Even a little domain knowledge is important for this step. For example, if you are using ChatGPT to write some Python (a programming language) code for yourself, you will need to know at least the basic functions or have a basic understanding of how Python works. Having this knowledge will give you the ability to write a much more detailed prompt and get results faster.
Context: LLMs have been trained on a huge amount of data that they use to predict text. Some of this data may be irrelevant. So, always give the LLM context.
For example, ‘Write an essay about AI’ has no context; the model may generate material outside the scope of your essay. Instead, give it context: ‘Write a 500-word essay about the use of AI in food marketing, using American slang.’ Embellish your prompts with all the details you can. Attaching documents and tables can bring the output closer to what you want.
Constraints: There are several ways to constrain the output of ChatGPT. An easy one is a word count, as in the essay example; it makes sure the LLM generates the right amount of text for the assignment. You can also give the LLM emotional stakes, such as mentioning that the essay is a significant part of your grade and that you need it to be the best essay about AI in the world. Also, refer back to the clarity point here, because you need some domain knowledge to check that the LLM is not hallucinating the facts in the essay.
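The four Cs above can be treated as a simple checklist when assembling a prompt. Here is a hedged sketch of that idea as a small Python helper; the function name, parameters, and the example topic are purely illustrative, not part of any real API.

```python
# Illustrative sketch: combining the four Cs into one prompt string.
def build_prompt(task, context, output_format, word_limit):
    return (
        f"{task} "                                   # Content: the core ask
        f"Focus on {context}. "                      # Context: scope it down
        f"Present the answer as {output_format}. "   # Clarity: name the format
        f"Keep it under {word_limit} words."         # Constraints: length cap
    )

prompt = build_prompt(
    task="Write an essay about AI.",
    context="the use of AI in food marketing",
    output_format="plain paragraphs with a short conclusion",
    word_limit=500,
)
print(prompt)
```

Even if you never script your prompts, running through the same four slots mentally before you hit enter tends to produce better first answers.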
Make helpful personas
Prompts also benefit from personas. For example, suppose you need the same essay on AI. Tell ChatGPT to pretend that it is the foremost expert on AI and that everything it says is backed by scientific papers. This persona has a two-fold benefit: first, the model will write as that expert; second, it will check what it says against scientific papers and articles. You can also ask it to cite sources for additional flair. However, this doesn’t stop the LLM from hallucinating, and you will need some domain knowledge to cross-check. You can also use a persona to check your own work.
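If you use a chat model through code rather than the web interface, the natural home for a persona is a “system” message. Many chat APIs accept a list of role-tagged messages in roughly this shape; the exact wording of the persona below is just an example, and no specific provider’s client library is assumed.

```python
# Sketch of the common system/user message pattern used by chat APIs.
# The persona goes in the "system" message; the task goes in the "user" one.
messages = [
    {
        "role": "system",
        "content": (
            "You are the foremost expert on AI. Back every claim with "
            "published scientific papers and cite your sources."
        ),
    },
    {
        "role": "user",
        "content": "Write a 500-word essay about the use of AI in food marketing.",
    },
]

# `messages` would then be passed to whichever chat-completion
# endpoint or library you are using.
for message in messages:
    print(f"[{message['role']}] {message['content']}")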
Practise and play around
Prompt engineering is also about poking, prodding, and pushing the limits of the LLM. So, if you don’t get the desired result on the first try, keep trying different approaches. Eventually, you will figure out what works best for your use case.
