Field Guide 22 February 2026

So You've Become the AI Person: Ten Things Nobody Warned You About

A comprehensive, thoroughly demoralising field guide to AI literacy training, covering everything from performing competence in post-graduate mathematics to the specific psychological horror of demonstrating Copilot to a live audience. Consider this the orientation nobody gives you before they hand you the lanyard.

Consider this a public service announcement. If you are, right now, standing at the precipice of becoming your institution's AI person - if someone in management has looked at you with the particular expression of a person who has found a volunteer and doesn't intend to explain the role first - I want to help you. Not by making it sound appealing. By making it sound accurate.

I am, technically speaking, the AI literacy trainer at my institution. I say "technically" because when my colleagues need to know something about artificial intelligence they send me a Teams message instead of Googling it, which is either a vote of confidence or evidence that they find me easier to navigate than a search engine. There is no actual job title for this. My official designation is digital literacy trainer at MVIT, which is a real role with a real box on the organisational chart. The AI training materialised separately - not because anyone planned it, but because it needed to exist, nobody else was doing it, and I was standing close enough to the problem to get assigned to it. This is, in my experience, how most important roles in education are created. Not through strategic workforce planning. Through proximity and a willingness to say 'yeah okay!' before the question has been fully formed.

What follows is the field guide that did not exist as I 'evolved' into this role. The ten things. The comprehensive inventory of experiences that will shape your professional life, organised for easy reference and delivered with the warmth of someone who came out the other side and is still, technically, standing.


1

You Will Need to Appear to Understand Mathematics You Have Not Studied - Instantly

Here is the thing about explaining how AI works: it requires explaining mathematics. Not vaguely gesturing at mathematics while saying the word "algorithms" and hoping the room fills in the gaps with whatever it believes algorithms to be. Actual mathematics. Vectors. Matrices. Attention weights. Calculus derivatives that govern how a model adjusts its parameters during training, described in the notation of people who spent seven years earning the right to write those symbols.

You have not studied this mathematics. You know this. Your audience does not know this, which is the entire load-bearing foundation of your professional credibility.

What you will do is learn just enough to speak fluently and not enough to be cross-examined. You will understand the intuition behind backpropagation without being able to perform backpropagation. You will describe gradient descent using the metaphor of a ball rolling downhill through a loss landscape - which is accurate enough to be useful and vague enough to survive the first follow-up question. You will, in short, learn to explain calculus the way a travel writer describes a country they've visited once: with confidence, selective detail, and the quiet prayer that nobody in the audience has actually lived there.
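
For the record, the metaphor is hiding less machinery than you'd think. Below is a minimal sketch in Python - one invented data point, one parameter, a made-up learning rate, nothing a real model would recognise as scale - of the ball rolling downhill: compute the error, compute the slope, take a small step against the slope, repeat. The chain rule that the dreaded follow-up question asks about is the single line computing the gradient.

```python
# Gradient descent on one parameter: fit y = w * x to a single data point.
# A toy illustration of the "ball rolling downhill" metaphor - all values invented.

x, y_true = 2.0, 10.0    # one hypothetical training example
w = 0.0                  # the parameter, starting at the bottom of nothing
learning_rate = 0.1

for step in range(20):
    y_pred = w * x                      # forward pass
    loss = (y_pred - y_true) ** 2       # squared error: the "loss landscape"
    grad = 2 * (y_pred - y_true) * x    # chain rule: d(loss)/dw
    w -= learning_rate * grad           # roll downhill a little
    print(f"step {step:2d}  w={w:.4f}  loss={loss:.4f}")
```

Run it and the loss shrinks toward zero in twenty steps, which is roughly the entire emotional arc of machine learning, minus the venture capital.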

The moment of maximum danger is the deepening question. There are two kinds of questions you will receive: the clarifying question and the deepening question. The clarifying question is: "So the model adjusts its weights to reduce the error?" The deepening question is: "Can you walk us through the chain rule application in that context?" The answer to the deepening question is "absolutely, but let me bring us back to the practical application here" - delivered with the serene composure of a person stepping backward from the edge of a cliff and pretending they simply chose to stop walking.

You will become genuinely good at this. The retreat will look, to the untrained eye, exactly like a pivot. This is a professional skill. Cherish it.


2

Imposter Syndrome: Chronic, Structural, and Probably Accurate

The cruel thing about imposter syndrome in this particular field is that it is, at least partially, correct. Expertise is a moving target. The field changes fast enough that the person who read everything last month is behind by Tuesday. This should, in theory, level the playing field. In practice, it means everyone is a partial fraud and the successful ones are merely better at concealing the partial.

You will receive compliments. Someone will call you an expert to your face, possibly at a conference, possibly into a microphone. You will accept this compliment with the visible composure of a poker player who has just been dealt three aces and is desperately hoping nobody asks them to show their hand. Inside, a very different conversation is happening. "Expert?" your internal monologue will say, in the tone of someone offered a gift they plan to throw away the moment they're alone. "Expert in what? The things you explained before anyone checked your working? The model capabilities that were accurate last Thursday? The features you demonstrated last week that Microsoft quietly discontinued over the weekend without telling anyone, including themselves?"

The professional advice - the real advice, from people who have survived this - is to reframe expertise as "being further along the same uncertain path as everyone else." This works. It is also, and I say this with affection, the kind of reframe that sounds excellent until 2am on a Tuesday when you discover there's an entire category of model architecture you've been mispronouncing for six months and you've now mispronounced it into a recorded webinar that forty people watched. At which point the reframe collapses and you simply lie there in the specific darkness of someone who knows what they don't know, and that is quite a lot.

It passes. It always passes. And then a new thing is announced, and it starts again.


3

You Must Walk a Tightrope Between AI Cheerleader and Existential Doomer, and the Tightrope Is on Fire

Your actual professional position is this: AI is transformative, exciting, and full of genuine potential for education and productivity. It is also reshaping labour markets, embedding bias, concentrating power in the hands of a small number of companies with inadequate oversight, and advancing at a pace that outstrips every governance framework currently attempting to catch up with it. Both of these things are true simultaneously. Both are important. Neither cancels the other out. Your job is to communicate both, in the same breath, to an audience that would very much prefer a simple answer.

They don't get one. Nobody gets one. The person giving you a simple answer is either not looking hard enough or is selling something.

What you will learn to say is: "It depends on how we choose to use it." What the room hears is: "She doesn't know either." What you mean is something closer to: "The technology is genuinely dual-use in ways that require ongoing institutional and policy engagement," which is accurate but has the rhetorical punch of a wet paper bag and you know it.

The comedy of the tightrope is that it will periodically fling you off into one extreme and the overcorrection will fling you into the other. Monday: inspiring presentation about AI and accessibility for learners with disability, you go home feeling like a visionary. Wednesday: you read something about hallucinations in a medical AI context and spend Thursday in a private ethical crisis about whether you've been irresponsible. Friday: recalibrated back to cautious optimism, just in time for the weekend. Saturday: a new model is released. The cycle restarts. You do not get a day off. The tightrope does not take annual leave.


4

Everyone Will Assume You Didn't Write That

You wrote it. You wrote all of it. You sweated over the word choices, restructured the argument four times, rewrote the introduction at 11pm because it wasn't quite right, and you care - deeply and specifically - about the difference between a good sentence and a merely adequate one. And then you handed it to someone who read the first paragraph and thought: ChatGPT.

This is the specific tax levied on people who work in AI literacy. The better the writing, the more suspicious they become. You have, through no fault of your own, ended up in a profession where demonstrating your craft makes people doubt your authorship. The only solution - the only way to prove you wrote it - would be to produce worse work, and you're not willing to do that. Partly out of professional pride. Partly because the AI can produce worse work faster and with significantly less psychological damage than you can.

The thing you cannot say, but want to say, every single time: "I know exactly what AI-generated writing looks like. I teach a course on it. The reason this doesn't read like AI is that I spent forty minutes on the second paragraph alone, which is a thing a language model does not do, because it does not care about the second paragraph, because it does not care about anything, because it is a very impressive autocomplete that has never once experienced the particular despair of deleting a sentence you worked on for twenty minutes because it was almost right."

You will not say this. You will smile and say "I do use AI as a thinking partner sometimes," which is true and which is also, in context, a capitulation. The irony is structural and load-bearing. You are living inside it.


5

Everyone Expects You to Produce Ten Times the Work (Because of the Tools You Can't Access)

Because you work in AI and therefore have access to tools that make everyone ten times more productive - this was announced at a conference, there were slides, someone used the phrase "productivity multiplier" without visible embarrassment - you are now expected to produce ten times the output. The fact that you spend approximately 40% of your professional energy keeping up with the field, another 30% attempting to access tools your institution hasn't properly licensed, and the remaining 30% actually building things does not appear in the productivity presentation. The maths in the productivity presentation is optimistic in the way that all maths is optimistic when it's trying to justify a purchasing decision.

What nobody says out loud: AI makes productive people more productive. It does not make the hours in the day longer. It does not speed up the IT procurement process, which currently operates on timelines best measured in geological epochs. It does not fill in the compliance documentation. It does not attend the meeting about the meeting. It does not navigate the SharePoint folder structure that was reorganised by committee while you were in that meeting, nor does it file the resulting confusion under a heading that anyone can find.

What AI has genuinely done for you: automated some things that used to take time. What this has produced: more time, which has been immediately colonised by more expectations - specifically, the expectation that you will now train everyone else to use the tools you've been quietly hoarding. The net result is identical to before, except now you're also running workshops every second Thursday for public servants, educators, and small business owners across the Territory who were sent there by managers who believe "attending an AI session" is the same as "becoming proficient with AI."

It is not the same thing. You know this. You cannot say it out loud. You smile and confirm the booking.


6

Saying "Neural Network" Is a Capital Offence

There is a level of technical specificity that is considered appropriate in AI literacy training, and it is considerably lower than you think.

Say "neural network" and you will be told you are overcomplicating things. Reference "inference" and you will see the specific glaze descend over the room - a look somewhere between polite disengagement and the onset of mild panic. Mention "attention mechanism" and you might as well stand up and announce that the rest of the session will be conducted in an extinct language. Suggest that someone might want to understand the difference between training data and inference, and you will be asked whether this is "really necessary for practical use," which is a question that translates as: "Can we go back to the bit about writing emails?"

The correct vocabulary for AI literacy training is: "it's a bit like," "imagine if," and "think of it as a very sophisticated autocomplete." That is the entire lexicon. You will spend forty hours understanding the actual mechanics and then distil the whole thing into a metaphor about a golden retriever who has read the entire internet and is doing its best to guess what word comes next. This is simultaneously accurate enough to be useful and inaccurate enough to make you wince quietly every time you say it.
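
For what it's worth, the autocomplete metaphor can be made literal in about fifteen lines. The sketch below - invented three-sentence corpus, plain Python, no neural network, none of the scale that makes the real thing interesting - is a bigram model: count which word follows which, then guess accordingly. A real model does this with a transformer trained on vastly more text instead of a Counter over three sentences, which is the part the golden retriever glosses over.

```python
import random
from collections import Counter, defaultdict

# A toy "autocomplete": a bigram model built by counting, not a real LLM.
# The corpus is invented for illustration; a real model trains on far more.
corpus = ("the model guesses the next word . "
          "the model reads the prompt . "
          "the retriever guesses the next word .").split()

# Count which word follows which word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def guess_next(word):
    """Pick a next word, weighted by how often it followed this one before."""
    options = following[word]
    return random.choices(list(options), weights=list(options.values()))[0]

# Start somewhere and keep guessing the next word, one token at a time.
word = "the"
sentence = [word]
for _ in range(6):
    word = guess_next(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the model guesses the next word ."
```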

The vindication, when it eventually arrives, is silent. Three months later someone will mention casually to their colleague that AI is "basically a very smart autocomplete that learned from everything on the internet." And you will feel the specific, quiet satisfaction of an idea that has successfully completed its journey from your brain to theirs with most of its meaning intact, which in this profession counts as a major win.


7

Your YouTube Algorithm Is Now a Horror Show and You've Developed a Clinical Taxonomy for It

Somewhere around your third month in the role, the algorithm noticed you. Not you as a person - you as a content category. You watched one video about AI in education and the algorithm made a note and has been feeding you its findings ever since, with the tireless enthusiasm of a research assistant who has been given one brief and has interpreted it very, very broadly.

Your recommendations now consist of: fourteen videos about how AI will destroy higher education (six of which are the same video with different thumbnails); nine webinar recordings from institutions whose approach to the crisis you have now catalogued with the clinical precision of a naturalist documenting subspecies in a disappearing ecosystem; seven tutorials demonstrating Microsoft Copilot features that no longer exist in the interface shown; and one video from 2019 about blockchain in education that the algorithm appears to have included as a form of historical irony, or possibly a threat.

You have developed, through sheer unavoidable exposure, a working taxonomy. You can identify the genre from the thumbnail alone. The Visionary, who has a framework and the framework has arrows. The Cautious Reformer, who believes everything will be fine if we just redesign our assessments, which will take approximately one career and three committee approvals. The Vendor, who has a product and statistics that would embarrass a first-year research methods student. The Person Who Has Discovered Critical Thinking About AI and is giving you the full tour whether you booked it or not. And, most reliably, The Panel Discussion, in which four people who broadly agree about things speak at length about those things while a moderator runs ten minutes over and thanks everyone for "such a rich conversation."

You have stopped watching most of them. You scroll the comments instead, where the real discourse lives - written by practitioners who are tired in the specific way that only comes from having the exact same conversation inside an institutional context for two years, and who say so, at length, in a comments section that no keynote speaker will ever read.

But the algorithm is only half the problem. The other half arrives in your inbox, and it has the structural properties of a tidal bore.

Because you are the AI person, your colleagues now forward you things. Every colleague. With extraordinary regularity. Each message arrives with the sincerity of someone who genuinely believes they are doing you a favour, and they are - in the same way that someone who has just discovered that water is wet is doing oceanographers a favour by letting them know. The subject line is always some variation of "thought this might be of interest," "in case you haven't seen this," or the devastatingly non-committal "FYI." The attachment, the link, the screenshot: something that appeared in their newsfeed this morning because the AI content has not yet colonised their algorithm the way it has colonised yours. For them, this is a novelty. For you, it is a Tuesday.

The content arrives in three reliable categories. First, there are the new wrapper apps - presented as groundbreaking, and sometimes genuinely interesting, though you have now received approximately forty "Have you used this?" messages about tools that sit on top of the same underlying model you use every day, dressed in different colours. Second, there are the productivity miracles - apps that will turn your workflow into a frictionless paradise using AI, most of which have a free tier that expires in fourteen days and a paid tier that costs more than your rent. Third, and this is the category that truly separates the AI-person experience from any comparable professional role: the AI-enhanced solutions to problems that have never existed.

Someone found an app that uses AI to turn pet photos into three-dimensional talking mascots. These mascots can, with appropriate prompting, be toggled into "learn mode" to accompany a child through their maths homework. Your colleague has sent you this. Your colleague is asking: "Have you seen this? Could we use it in training?" You consider the question seriously for approximately three seconds before concluding that deploying a talking AI dog to guide grown adults through your AI workshop content would raise more questions than it answered, and respond with the diplomatic warmth you have spent eight months developing in precisely these situations.

Then there are the cybersecurity incidents. You are, among other things, also the cybersecurity person. There are thousands of significant AI-related security incidents occurring with increasing frequency across every sector. They are not minor. They are not abstract. They represent a set of rapidly evolving threat vectors that the industry is genuinely struggling to get ahead of. And yet what arrives in your inbox each week is a single, vivid, retail-grade anecdote, forwarded as a rare first glimpse of a concern you may, until this very moment, have been entirely unaware of. A story about a rogue AI agent whose human was a casual McDonald's employee who found the admin login for the order kiosk and used it to generate AI images of speculative McFlurry flavours. Forwarded with the earnest subject line: "Bit concerning from a cyber perspective ... thought you should see this."

You thank them. You do this every time. You have developed, through sheer repetition, a response that is warm, acknowledges the sender, and does not in any way convey that you have a folder - an actual folder - labelled "Inbox Articles Others Think I Haven't Read," which is currently holding 847 items and growing at approximately twelve per week. You are grateful for the community it represents, even as it is slowly burying you. The FYI tsunami is, at its heart, evidence that people care. That they're thinking of you. That the topic is landing in the broader consciousness. This is what you wanted... This is what you wanted... This is what you wanted.


8

Copilot Exists Specifically to Make You Look Foolish in Public

Microsoft Copilot is not a tool. It is a live performance piece about the fragility of professional credibility, and you are always the audience, sometimes the stage, and on one notable occasion the person standing in the burning theatre insisting that the smoke is a scheduled feature.

When you are not in a training session, Copilot is merely unreliable, inconsistent, occasionally impressive, usually surprising, always changing in ways that are not announced anywhere you can find. When you are in a training session, it becomes something categorically different. Something that appears, with the focused precision of a system that cannot technically have intent, to select the exact moment of maximum witness to produce an interface nobody has seen before.

The Copilot live training experience proceeds as follows. You have prepared. You have tested every step. You have a screenshot backup. You have a contingency for the contingency. You walk the group through Step 1 with the composure of a surgeon who has performed this procedure many times: "Can you explain how I change a formula to an absolute reference?" - a question so foundational that Excel has been answering it, via the F4 key, since approximately the fall of the Berlin Wall. It responds. You exhale. You paste the actual syntax. Step 2 works. You feel, briefly and dangerously, like a person who is in control of events - which, in Excel Copilot training, is a sign the universe is on your side today.

It is not on your side today.

Step 3, which worked four times in practice, which you tested this morning, which you tested again ten minutes before the session because something felt wrong and you couldn't identify what - produces this: "I'm sorry, I'm not able to help with that right now." No elaboration. No error code. No indication of whether "right now" means the next thirty seconds or the remainder of the financial year. Just a polite, implacable refusal, delivered with the serenity of a system that has decided the session is over. And then (and this is the part that genuinely tests your performance of self-confidence) you try a follow-up. Something simple. Something involving the actual spreadsheet data you spent the last ten minutes walking Copilot through, the data it summarised, the data it referenced by column name with apparent confidence approximately thirty seconds ago. And Copilot freezes - the panel remains blank. This is Copilot's equivalent of a baffled face - the expression of someone who has never seen this spreadsheet before. The column headers it just quoted back to you: unknown. The entire context of the last ten minutes, carefully established, gently confirmed, apparently solid: dissolved without ceremony.

Meanwhile, one participant has raised their hand to note that their version doesn't seem to have the menu item you just pointed at. Another has it, but theirs appears to be doing something different, with the cheerful autonomy of a system that has decided to be helpful on its own terms, in its own time, with no regard for the pedagogical moment currently in progress. The menu item that does the actually useful thing is now in a completely different position from where Microsoft's own documentation says it lives, because Microsoft updated the documentation on Tuesday but updated the product on Monday, and nobody in those two teams spoke before publishing.

You will learn to turn these situations into 'teaching moments' by framing the incident as follows: "Interestingly, mine seems to have updated! This is actually a great opportunity to see how quickly these interfaces change." This is called graceful recovery. It is the AI training equivalent of watching your cake collapse and pivoting to a lecture on the Maillard reaction.

The room will nod while also watching you with the careful attention of people who are now genuinely uncertain whether this is supposed to be happening, which is a question you are also privately asking, but from inside it, which is worse. You continue. You do not look at the part of your brain that is silently screaming. You have learned not to look there during working hours.


9

3am: The Tally of Everything That Needs Updating

You wake at 3am and you take the tally. It is not a choice. It is a condition. The tally arrives unbidden, with the reliability of a standing meeting that nobody scheduled and nobody can cancel.

Tonight's inventory: the section of your training material describing GPT-4 as the most capable available model, now outdated several times over, updated each time with declining enthusiasm. The section on AI detection software, still accurate, in that it is still wrong, but the specific products referenced have been acquired, pivoted, or quietly discontinued. The activity asking students to compare two AI writing assistants: one of them was purchased and relaunched under a different name with different pricing, the other pivoted to enterprise and removed free access, and you found out in the way you always find out, which is by trying to demonstrate it in front of people. The pricing section: historical. The image generation module: archaeological. The opening bit that reads "AI is changing rapidly": accurate, but now an epic understatement.

The tally runs for forty minutes. You do not get back to sleep. You open your phone and begin making notes, which is the medically inadvisable thing to do at 3am but which is also the only thing that quiets the tally.


10

The Question You Cannot Answer (But Have to Answer Anyway)

There is a question that gets asked in every training session, every workshop, every conference presentation, every meeting where AI comes up for more than four minutes, and it is asked earnestly, with genuine anxiety, by people who have dedicated years to a craft and need an honest answer:

Is this going to take my job?

The honest answer - the evidence-based, intellectually defensible, non-vendor-sponsored answer - is this: some roles will change significantly, some will diminish, new roles will emerge that don't exist yet, the transition will be uneven and not uniformly kind, and anyone who tells you with confidence exactly how this resolves is operating on speculation dressed up as forecast. Anyone promising that every job is safe is selling reassurance. Anyone promising that every job is gone is selling panic, which is also a product with a market.

This is true. This is the correct answer. It is also, to a room full of vocational educators across the Territory who are asking in good faith whether the skills they've spent careers developing have a future, very nearly useless.

What they want is certainty, in whichever direction. What you can give them is the most accurate picture you can construct from the evidence available, clearly labelled as partial and provisional, delivered with the warmth of someone who genuinely cares about the people in the room and the honesty of someone who will not lie to them to make the next forty minutes easier. It doesn't land with the satisfying thud of a definitive answer. It doesn't produce the relief of certainty or the clean anger of a clear threat. It produces, instead, something quieter: the sense that they're being levelled with by someone who is in the same uncertain situation, just slightly further along the path, looking back and saying, honestly, that the view from here is complicated but navigable.