A Semester Begins Before the Syllabus

Aug 17, 2025


by Jason Fernandez, MA, LMSW

The blank document stares back. Summer break ends in three weeks and you haven't touched the course prep. You know the feeling. Coffee's gone cold, notepad's somewhere under last semester's grading, and that voice in your head keeps asking whether AI should be part of this process. Nobody taught us this part. We learned Bloom's taxonomy, backward design, student-centered approaches.

But AI-supported curriculum design? That wasn't in the pedagogy courses.

I spent this past semester wrestling with the same question. Thirteen years in higher education, and suddenly I'm supposed to know how machines fit into teaching. The tech community makes it sound easy. Type a prompt, get a syllabus. Except that's not how teaching works. Teaching starts in the gut, moves through experience, and lands on paper last. The structure you see in week one emerged from countless micro-decisions about pacing, emotional scaffolding, and when students need challenge versus support. So if we're bringing AI into curriculum design, we'd better respect that process.

I build my own AI agents to help me create my curriculum. Before that, I, too, was using ChatGPT and other platforms to help with my teaching. We have a tendency to just ask for what we need, but there is more we should do: frame every new chat session with information that lets the Large Language Model (LLM) know what we're actually after. I do this with five elements: Role, Context, Task, Format, and Rules. That framing makes sure the AI understands we're building learning experiences. Without it, the model spews generic educational material that could work anywhere (which means it works nowhere). With proper framing, the AI becomes a thinking partner instead of the content generator the tech industry thinks we need.
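If it helps to see that frame written down, here is a minimal sketch in Python of how those five elements might be assembled into one opening message. The helper name and the placeholder wording are mine, not a finished prompt; the point is simply that every section gets filled in before the first request.

```python
# A minimal sketch of the five-element frame as a reusable template.
# The helper name and placeholder wording are illustrative, not a finished prompt.

def frame_prompt(role: str, context: str, task: str, format_spec: str, rules: str) -> str:
    """Assemble the five elements into one opening message for a new chat session."""
    return "\n\n".join([
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        f"Format: {format_spec}",
        f"Rules: {rules}",
    ])

opening_message = frame_prompt(
    role="You are a patient, supportive, and collaborative AI agent with the knowledge of a social work instructor.",
    context="A 4-week asynchronous online graduate course that must meet CSWE accreditation standards.",
    task="Draft a week-by-week course outline and flag anything that needs my review.",
    format_spec="Time blocks with topics, activities, and discussion questions.",
    rules="Ask for clarification when you are unsure rather than guessing.",
)
print(opening_message)
```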


Prompt Engineering Claude AKA Writing Well


The Role component changes everything because telling the AI "You are an instructor" fundamentally shifts how it processes your request. Research shows that being specific with clear, formatted instructions can improve accuracy by as much as 76%. When I specify "You are a patient, supportive, and collaborative AI agent that uses the knowledge of a social work instructor," I'm activating specific knowledge domains within the model that align with how we actually think about teaching. The difference between asking for "help with a lesson" and positioning the AI as a teaching colleague is the difference between getting a Wikipedia summary and getting nuanced pedagogical support.
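To make that difference concrete, here is a quick before-and-after of the same request. The wording is mine and deliberately abbreviated; the only thing that changes between the two is the framing.

```python
# The same underlying request, framed two ways. The first tends to produce a
# Wikipedia-style summary; the second positions the AI as a teaching colleague.

generic_ask = "Help me with a lesson on human sexuality."

role_framed_ask = (
    "Role: You are a patient, supportive, and collaborative AI agent that uses "
    "the knowledge of a social work instructor teaching graduate students.\n"
    "Task: Help me plan one lesson for a CSWE-mapped Human Sexuality course, "
    "and ask me questions wherever my goals are unclear."
)
```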

Let me walk you through an actual example. Last month I taught Human Sexuality to my graduate students. I needed the model's help creating a 4-week class that was true to my voice, met my teaching standards, and mapped to CSWE accreditation standards. Although I build my own agents, I do have one general agent that can adapt to many situations. I decided to use this agent and opened my AI session with this prompt:

Role: You are a patient, supportive, and collaborative AI agent that uses the knowledge of a social work instructor teaching a 4-week online Human Sexuality class that uses CSWE mapping standards to ensure that the course meets accrediting guidelines.

Context: You are teaching a 4-week asynchronous online graduate course in Human Sexuality. You must meet the following CSWE standards: "I would list the standards"

Additionally, you must focus on the following instructional material: "I would list it here, or I would upload the document(s) for the AI to use and tell the AI I am uploading them. The online course must use Perplexity AI, an AI-powered search engine, to help introduce AI search." This goes on until I am satisfied.

Task: Design a 4-week course in Canvas LMS that meets CSWE standards. Here is a workflow for you to follow:

Research CSWE standards for social work education.
Review higher education pedagogy and standards to use when building the curriculum.

Format: Create time blocks with topics and activities. Include discussion questions and at least one hands-on exercise. Use this syllabus (which I uploaded) to follow this format. No AI fluff, no "X, Y, Z" sentence structure. Use academic and professional language.

Rules: Use only terminology we've already introduced. Center diverse perspectives. Make space for students who might have personal experience with these systems. Do NOT create content that violates professional boundaries. Do NOT use outdated models of sexuality. If you're unsure about current CSWE competency language or Canvas functionality, ask for clarification rather than guessing.
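I open this in a chat window, but if you prefer to keep the frame in a script, the assembled message might look roughly like the sketch below. The SDK shown (Anthropic's Python client) and the model name are examples only, and the section text is abbreviated from the prompt above, so treat this as an outline of the idea rather than my exact workflow.

```python
# A rough sketch: the same five sections assembled and sent through an LLM API
# instead of a chat window. The SDK and model name below are examples only;
# the section text is abbreviated from the prompt above.
import anthropic

sections = {
    "Role": ("You are a patient, supportive, and collaborative AI agent with the "
             "knowledge of a social work instructor teaching a 4-week online "
             "Human Sexuality course mapped to CSWE standards."),
    "Context": "Asynchronous online graduate course. CSWE standards: [list them here].",
    "Task": "Design a 4-week course in Canvas LMS that meets the CSWE standards above.",
    "Format": ("Time blocks with topics and activities, discussion questions, and at "
               "least one hands-on exercise. Academic and professional language."),
    "Rules": ("Use only terminology we've already introduced. Center diverse perspectives. "
              "Do NOT violate professional boundaries or use outdated models of sexuality. "
              "If unsure about CSWE language or Canvas functionality, ask before guessing."),
}
opening_message = "\n\n".join(f"{label}: {text}" for label, text in sections.items())

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use whichever model you have access to
    max_tokens=2000,
    messages=[{"role": "user", "content": opening_message}],
)
print(response.content[0].text)
```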

Notice how the negative instructions prevent those moments where you get a response full of clinical terminology that would violate the educational boundaries of a classroom.

Research confirms that describing what NOT to do can be just as important as giving positive instructions. Claude's own system prompts contain 39 instances of "never" compared with 31 instances of "always": negative instructions help steer the tool's behavior, and they tend to produce faster outputs with fewer tokens spent processing the task.

What happened next proved the value of precision. Instead of generic course modules, the AI asked: "Which CSWE competencies should I prioritize for graduate students who may already be in field placements?" and "Should the Perplexity AI assignments focus on research skills or client resource location?" These questions pushed me to clarify my own pedagogical goals. When I mentioned uploading documents, it asked which frameworks I wanted centered, forcing me to articulate assumptions I'd been making about what graduate students already knew.


Sometimes it’s a complete miss….


Sometimes the output misses completely. Last week I got a response that treated Human Sexuality like a clinical diagnosis course, all terminology and no lived experience. That's useful information too. It tells me I wasn't clear about the educational (not therapeutic) nature of the course. The best part is when the AI admits uncertainty. Adding explicit escape hatches like "If you're unsure about CSWE standards or graduate-level pedagogy, say so and ask for clarification" prevents those confidently wrong responses that waste everyone's time.
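If you keep your Rules section as a list, the escape hatch is just one more line at the end. A small sketch, with the wording borrowed from above:

```python
# Appending an explicit escape hatch to the Rules section so "admit uncertainty"
# is written down rather than assumed.
rules = [
    "Use only terminology we've already introduced.",
    "Center diverse perspectives.",
    "Do NOT create content that violates professional boundaries.",
    "Do NOT use outdated models of sexuality.",
]
rules.append(
    "If you're unsure about CSWE standards or graduate-level pedagogy, "
    "say so and ask for clarification rather than guessing."
)
rules_section = "Rules: " + " ".join(rules)
print(rules_section)
```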

The hardest part? Remembering who holds authority in the classroom. Me. You. The person who sees when a student shifts uncomfortably during discussions of identity, who notices which examples resonate with students' field experiences. AI can help me plan for those moments but can't read them in real time. So while I use these tools for curriculum design, I never let them make pedagogical decisions. That distinction matters more than any framework or prompting strategy.

My colleagues often ask if this actually saves time. Wrong question. Sometimes it takes longer because the AI forces me to articulate assumptions I've been running on autopilot for years. But the curriculum that emerges feels more intentional, more responsive to what students actually need. Teaching graduate courses means holding space for students' professional development alongside their personal growth, and having a tireless thinking partner helps me plan for those intersections before they arise in discussion boards or synchronous sessions.

If you're staring at your own blank syllabus wondering where to begin, start with your teaching values. What do students need from you this semester? What professional competencies must they develop? Frame those answers clearly using the five-element structure I've shared, remembering to specify what not to include as clearly as what you want. Give the AI permission to ask questions when it needs clarity. The semester really does begin before the syllabus is built. It begins with understanding why you're teaching and who you're teaching for.

If you want to talk through your own AI teaching workflow or just need someone who's been in the messiness of figuring this out, book time with me through my Linktree. We're all learning this together, especially those of us who've been teaching long enough to remember overhead projectors but now need to figure out where AI fits.

With curiosity,

Jason