Author: Han Yang MasterPa
Humanities workers are not the ones creating this wave of change in the world, but they stand at its forefront.
Sometimes I feel that the accounts selling AI tutorials always treat AI as magic: give you one magical prompt and you can do anything. Reality, of course, isn't like that. For a while, because we had founded FUNES, we needed to produce a large amount of content with AI every day. On top of that came the production of *Ephemeral World* and my own writing, which meant that manpower alone was no longer enough. So we experimented extensively with how to use AI to assist our content marketing and humanities research.
Later, when new colleagues joined the company, I made a simple keynote. After hearing about this, Mr. Jia Xingjia from another platform invited me to give a presentation. My partner, Keda, and I named this presentation "A Guide to Using AI for Humanities Workers."
It was initially a purely private sharing session, mainly covering general principles. We ran it several more times afterward, gradually expanding it, but it had never been held publicly until this year, when we launched the program "Poetry Combing the Wind" with Chongqing and had the first complete public discussion. The following text is compiled from the podcast "AI Usage Guide for Humanities Workers," with AI-assisted organization and some abridgments.

Over the past year, Keda and I have shared this experience of how to use AI with many friends who work in content creation, research, and knowledge product development. Its goal isn't to teach you to memorize a few magic prompts, nor to treat AI as a panacea. It is closer to a working method: one that lets you genuinely integrate large models into your writing, research, editing, topic selection, data organization, and production processes without writing code, while keeping everything **traceable, supervised, and verifiable**, so that you are still willing to put your name on the result.

This approach comes from lessons learned in real projects: when content has to be mass-produced, relying on human labor alone collapses; but having AI write an article outright leads to hallucinations, laziness, and AI-flavored prose. So we had to turn creation into a production line, and the production line into a system that can be iterated. Today, I don't want to hand you a pile of prompts; I want to offer some key guiding principles and guidelines instead.

Before the principles: three bottom lines of this guide

Before discussing specific methods, let's clarify three bottom lines. They determine "how you use AI" and "why you use it this way."

The process must be traceable, supervised, and verifiable. You cannot aim only at a result and ignore the process. For humanities work, the black box is the most dangerous thing: hallucinations, misinterpretations, and misrepresentations can all happen quietly inside it.

It must be controllable. You need to be able to decide how the work is done, by what standards, where to slow down, and where to be strict. You're not "drawing cards"; you're producing.

Ultimately, you must still be willing to sign your name. "Am I willing to put my name on this?" is the final quality check. If you're unwilling to sign, it's usually not a moral issue; it's a sign that your will wasn't carried through the process, which means the quality is out of your control.
Principle 0: Don't make wishes on AI, treat it as a workbench
Many people use AI in essence by making wishes: "Give me a good joke," "Help me write a good article," "Explain this paper."
The problem is that "explain" itself has countless interpretations: explaining to laypeople, to undergraduates, to graduate students, or to colleagues are not the same task. AI cannot know your background, purpose, taste, and standards by default. If you don't spell them out, it can only give you the easiest answer, pitched at the "average human."
Using a large model as a workbench means you don't demand results from it; you use it to carry out a process. Your job is to clearly define the task, the standards, and the steps.

For example, suppose you want AI to explain a paper. You can turn a wish-like request ("explain this paper to me") into a workbench-style task:

**Define your target audience:** intelligent, curious graduate students who are not experts in the field.
**Define your explanation style:** heuristic, step by step, and academically rigorous.
**Define your structure:** first explain why the work matters, then give the background, then reconstruct the research process, then explain the key technical points, and finally offer takeaways.
**Define your tone:** respect the reader's intelligence, avoid condescension, and don't pretend the listener already has deep expertise.

You'll find that the more the request reads like an assignment brief, the less the output reads like AI, and the more the model behaves like a teaching assistant who can actually get things done.

Principle 1: To make AI work well, first reflect on yourself—you are the person in charge

If you hired a secretary, you wouldn't just say, "Revise Han Yang's article about the American Rust Belt." You would add: Why was this article written? Who is it written for? What's the current bottleneck? What problem do you hope the revision solves? Which parts can't be changed? What style do you want? Which metrics do you care about most?

The same applies to AI. Treat it as a very diligent and polite colleague, but one who **doesn't know the implicit premises in your head**. Real "prompt engineering" isn't about tricks; it's about a sense of responsibility: you are still the one doing the task, and the AI is only helping you. When you're dissatisfied with the AI's output, the most effective first reaction isn't "this AI is no good," but rather: Have I stated the target audience and purpose clearly? Have I provided enough background and constraints? Have I broken the abstract wish down into executable actions? Have I given it a standard for judging right from wrong?
Principle 2: Ask at least 3 models for the same question—each AI has a "personality" and area of expertise
In our company, I encourage every colleague who is new to large models to put each question to three different AIs at first. AIs, like people, differ: some are better at writing and word choice, some at reasoning and problem-solving, and some at code or tool calls. More practically, even models within the same product, and new versions of the same model, keep having their "style" and "boundaries" fine-tuned.
So a simple but extremely effective habit: **pose the same question to at least 3 different AIs.** You'll quickly develop a feel for it: Which one writes better, which one reasons better, which one researches better, and which one is most likely to slack off? Which tasks suit which model for the "first draft," and which suit which model as the "reviewer"? Which one is better at generating topics and structure, and which is better at generating paragraphs and sentences?

The value of this step isn't in "picking the strongest model." It's that you begin to manage models the way you would manage a team, rather than treating one of them as a single divine oracle.

Principle 3: AI is not omniscient—treat it as having the common sense of a "good university undergraduate"
A very practical piece of expectation management: AI's common-sense level ≈ an undergraduate at a 985 university (one of China's top-tier universities).
If you think "even an excellent undergraduate might not know this," then assume the AI doesn't know it either; at the very least, assume it will make it sound as if it knows when it doesn't.
This leads to two direct actions:
First, anything beyond common sense, you will have to teach it.

For example, if you want it to write jokes, genuinely distinctive and tasteful copy, or highly professional arguments, you can't just say "write it better." You need to provide examples, standards, things to avoid, and corpora. It takes you time to explain to a friend what counts as good writing; how can you assume AI knows it by default?

Second, treat it as a collaborating intern, not as a god. It can do a lot of "micro-interpolation" work: filling in the scaffolding you provide and weaving the materials you give it into readable text. But the scaffolding and the direction still come from you.
Principle 4: Let AI approach the goal step by step—a white-box, step-by-step approach is more reliable than a black-box one-shot
AI's advantage isn't that it "gives you the correct answer directly"; it's that it can reliably complete many small steps within a process you design. The more you demand that it do everything in one go, the more likely you are to get a black box that looks complete but is actually lazy.
A particularly intuitive example is TTS (text-to-speech) processing.
Instead of simply saying "pay attention to polyphonic characters and don't mispronounce them," break the task down into a series of steps, such as:

Mark pauses, stresses, and changes in speech rate.
Identify characters that may be polyphonic.
Check them against dictionaries or authoritative pronunciations (searching first if necessary).
Pre-mark commonly mispronounced characters.
Where a reading remains risky, replace the character with a homophone to eliminate the potential mispronunciation.

A human would do these "obviously correct" things by default; AI won't. If you don't write these obvious practices into the process, it will make mistakes on the easiest path.
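To make one of these steps concrete: the sketch below shows what "identify characters that may be polyphonic" could look like if you, a technically inclined colleague, or an AI with tool access scripted it. It is only an illustration under assumptions, not our actual pipeline; it assumes the third-party pypinyin library, and the sample sentence is a placeholder.

```python
# Minimal sketch: flag potentially polyphonic characters for human (or later AI) review.
# Assumes `pip install pypinyin`; one possible step, not the authoritative workflow.
from pypinyin import pinyin, Style

def flag_polyphonic_characters(text: str):
    """Return (character, possible readings) pairs for characters with more than one reading."""
    flagged = []
    for char in text:
        # heteronym=True asks pypinyin for every known reading of the character
        readings = pinyin(char, style=Style.TONE, heteronym=True)
        if readings and len(readings[0]) > 1:
            flagged.append((char, readings[0]))
    return flagged

if __name__ == "__main__":
    draft = "他把行李放在银行门口"  # placeholder sentence containing the polyphonic character 行
    for char, readings in flag_polyphonic_characters(draft):
        # A person, or a downstream step with a pronunciation dictionary, decides the
        # correct reading or substitutes a homophone the TTS engine reads reliably.
        print(char, readings)
```

The point is not the script itself but the shape of the work: each "obvious" step is made explicit, produces an inspectable intermediate result, and can be handed to whichever model or person does it best.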
Principle 5: Industrialize First, Then AI—You Can't Jump from the Agricultural Era to the AI Era
If your writing and research process is improvised, inspiration-driven, and has no data management, then it is indeed hard to hand it over to AI, because AI can only take over the parts that can be described and reproduced. So we first had to describe our own writing process to ourselves:
Why start with this story
Why choose this sentence
How to create examples
How to structure the narrative, how to transition, and how to conclude
How to connect small stories to a larger picture
Ultimately, the process was broken down into dozens of steps, with each AI performing only one of them. The result: it's not that the model suddenly became stronger, but that the process strung together its ability to "do a little bit well at a time."
When you can clearly describe how your article gets written, you'll find that what determines the upper limit of quality is never "which large model to use," but whether you have clearly explained your working methods.
Principle 6: Anticipate AI's tendency to be lazy—it will conserve computing power, so you need to remove "formatting obstacles" for it
AI will be lazy, and it's "systematically lazy": it won't open web pages if it can avoid doing so, it won't read PDFs if it can avoid doing so, and it will skip things if it can skip them. It's not that it's bad, but rather that, constrained by computing power and time, it naturally tends to take the least effort path.
Therefore, what you need to do is: Use AI's computing power on "understanding text," rather than wasting it on "processing formatting."
Highly effective ways of preparing materials include:

Convert materials to plain text or Markdown as much as possible before feeding them to the AI.
Copy webpage content into clean text (stripping navigation, ads, and footnote noise).
Run fact extraction and structure extraction over long materials before letting it write.
Convert PDFs, EPUBs, and webpages into searchable TXT files before any downstream task (see the sketch below).

You'll find that many people resist this kind of manual labor, feeling that "the machine should do the dirty work for me." In human-machine collaboration it's the opposite: the more mechanical work you're willing to do, the sharper and more reliable the AI's intelligence becomes.
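For those comfortable running a small script, here is a minimal sketch of that last item: converting a PDF and a webpage into clean, searchable text. It assumes the third-party pypdf, requests, and beautifulsoup4 packages; the file names and URL are placeholders, not real materials, and nothing here is required: copy-pasting into a plain text editor achieves the same goal.

```python
# Minimal sketch: turn source materials into plain text before handing them to an AI,
# so its attention is spent on the text itself rather than on formatting noise.
from pypdf import PdfReader          # pip install pypdf
import requests                      # pip install requests
from bs4 import BeautifulSoup        # pip install beautifulsoup4

def pdf_to_text(path: str) -> str:
    """Extract the text layer of a PDF, page by page."""
    reader = PdfReader(path)
    return "\n\n".join(page.extract_text() or "" for page in reader.pages)

def webpage_to_text(url: str) -> str:
    """Fetch a page and strip navigation, scripts, and other non-content noise."""
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    for tag in soup(["script", "style", "nav", "header", "footer", "aside"]):
        tag.decompose()
    return "\n".join(line.strip() for line in soup.get_text("\n").splitlines() if line.strip())

if __name__ == "__main__":
    # Placeholder inputs; substitute your own scans and pages.
    with open("source_material.txt", "w", encoding="utf-8") as out:
        out.write(pdf_to_text("interview_scan.pdf"))
        out.write("\n\n")
        out.write(webpage_to_text("https://example.com/archive-page"))
```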
Principle 7: Remember the limited context—make tasks as "compressed" as possible, don't expect them to "expand out of thin air"
AI has a context window and a "memory limit." Give it 20,000 words, and it may not remember much; give it 200,000 words, and it might only scan the titles. A vivid analogy is: lock a person in a small room for a day, give them a 200,000-word book, and ask them to memorize it—how much they can memorize is roughly the amount AI can "remember."
Therefore, there is a very counterintuitive but extremely important rule of thumb:
Compression is much easier than expansion: asking AI to condense, extract, or select from material you supply is far more reliable than asking it to expand a thin prompt out of thin air.
Principle 8: Resist the impulse to "I can fix it with a few tweaks"—Change the production line, not the result
This is where many skilled writers are most prone to failing in front of AI:
AI produces a draft that scores 59 points, and you think you can improve it to 80 points with a few tweaks, so you start revising; as you revise, you end up rewriting it; after rewriting, you say, "I'll do it myself," and then you never use AI again.
The solution isn't to work harder at "revising drafts," but to shift the effort upstream. Don't aim for AI to directly write a perfect 100. Your goal is a production line that consistently turns out 75-80. What you iterate is the process, to raise the average score, not to polish individual pieces toward perfection.

Principle 9: Treat the production line as a product iteration—reliability itself is value

When you have a system that can consistently hand you a 70-point starting point, its value isn't in whether it sounds like you. Its value is that you can get a usable draft at near-zero cost, and you can spend your energy on the higher-level judgments: topic selection, structure, evidence, taste, and trade-offs. What you need is not an omnipotent god to replace you, but a reliable factory: not perfect, but stable.

Principle 10: Quantity First—Produce More, Then Filter

Letting AI give you only one version usually gets you the most mediocre, conservative, "average" one. You need quantity to combat mediocrity. A more effective approach:

Summaries: 5 versions at a time.
Introductions: 5 at a time, then A/B test.
Topic selection: 50 topics at a time, then group and select.
Structure: 3 outlines at a time, then combine.
Wording: 10 different phrasings at a time, then choose the best.

Once you raise the average score and increase the output, 85- and 90-point "surprise samples" naturally appear in the distribution. Often what's good isn't "that one stroke of genius," but the fact that you've finally started working statistically.

Principle 11: Don't overstep your bounds—direct, taste, and rework like an executive chef
If you were the executive chef of a restaurant, you wouldn't personally smash cucumbers. You would:
Take a bite
Determine if it's acceptable
Give clear feedback (what's wrong, how to fix it)
Have the chef go back and redo it
The same applies to collaborating with AI.
You must respect the way it generates on its own; your role is to **teach it how to meet your standards**, not to step in and fix each result into a finished product yourself. Otherwise, you'll be worn down by endless patching and repairing.

The final fundamental principle: Return to the real world—materials × taste determine the upper limit of a work

In the AI era, the quality of a work increasingly comes down to materials × taste. Models may change and methods may iterate, but two things remain constant.

Materials come from the real world. Suppose you were given two choices for writing an article: use the latest model, but with only online resources; or use an older model, but with complete archives, oral histories, and on-site interviews. The latter is far more likely to yield a good piece of work.

Taste comes from long-term training. When "generation" becomes cheap, what's truly scarce is: knowing what's worth writing about; knowing which evidence is stronger; knowing which narrative is more powerful; and being willing to put in the physical effort for the materials, searching high and low and examining documents closely. AI changes the efficiency and the manner of your interaction with the materials, but the subject of the work is still you, and the object is still the materials. AI is merely part of the "verb."

Conclusion: Turn Anxiety into Skill

Many people struggle with AI not because they aren't intelligent, but because they're stuck in a cycle of "wish, disappointment, give up." What truly gets you out of it is treating AI as a workbench, engineering your tasks, white-boxing your processes, and building a feel for it through continuous practice. When you can do that, you're less likely to conclude in haste that "AI doesn't work"; you'll be more like a new kind of professional who can manage a new tool: neither looking down on it nor looking up at it, but placing it within processes, within reality, and within the works you're willing to put your name on.