When AI Meets Higher Education

Jul 18, 2025

A clinical social work friend recently told me about a colleague using AI to document their clinical notes. The social worker does not disclose clients' identifying information to the AI, but if they are using a free plan, their conversation data may still be used to train the LLM (large language model). I asked whether there was a policy in place to reinforce or curtail that behavior. She told me the colleague's supervisor is not a social worker and had created a policy that doesn't match the needs of the profession. That conversation crystallized something I've been observing across higher education: we're creating AI policies without understanding how different disciplines require different approaches.

Imagine a social work student turning to AI for client diagnosis. Instead of learning symptomology, differential diagnosis, or cultural factors, they simply ask AI. Even with sophisticated reasoning models and carefully crafted prompts, AI remains inherently flawed and biased. It cannot capture the behavioral, emotional, and psychosocial components that define clinical social work. The student misses the fundamental learning process of observing, analyzing, and understanding human complexity.

This scenario from social work has parallels across higher education; every discipline has its own version of the dilemma. Engineering students rely on AI for design calculations without understanding the underlying principles. Nursing students use AI for patient assessment protocols. Law students depend on AI for case analysis without developing their own legal reasoning. Each field requires distinct boundaries.

As the AI Technology Consultant at the Graduate College of Social Work, I advocate for discipline-specific AI policies. My position offers a unique vantage point. As a practicing social worker who later learned the technology, I understand both professional requirements and technological capabilities. This dual perspective reveals something crucial for all of higher education: AI policies cannot fit under one umbrella.

The Three-Tier Reality


MSW Classes: Foundations Need Flexibility

MSW courses build the foundation of social work practice. Students here can use AI in various ways, but they need guidance. These courses introduce core concepts, theories, and beginning practice skills. AI can enhance research, organize thoughts, and explore multiple perspectives. The key is teaching students to use AI as a learning partner, not a substitute for critical thinking.

Clinical Classes: Professional Boundaries Matter

Clinical courses demand different boundaries. We cannot have students learning to diagnose through AI, relying on it for clinical documentation, or using it for ethical consultations. The risk extends beyond inputting client information. The initial learning process of examining behaviors and understanding their clinical significance forms the bedrock of clinical social work. Even if an instructor fully integrates AI into their teaching, I would never advise students to use AI for diagnostic learning. We must model what we do in practice.

PhD Programs: Independent Scholars

Doctoral education presents another challenge entirely. While I'm familiar with PhD work, I'll gain deeper expertise when I begin my PhD in Social Work and AI at UT Arlington this fall. PhD programs develop independent scholars, not dependent ones. This means cultivating the ability to think independently, synthesize information critically, and formulate evidence-based hypotheses. AI could undermine this fundamental goal if used carelessly. Yes, AI can help locate research faster. But understanding when and how to use it requires input from professors who understand their program's specific scholarly development goals.

From Policy to Practice: My Human Sexuality Course


My online asynchronous human sexuality course, which began June 2nd, demonstrates this nuanced approach. We're using Perplexity AI as an academic search engine, but with careful structure. I created pre-approved reading spaces where students can explore curated information on relevant 2025 topics, including recent sexuality legislation. Each thread functions as its own chatbot for follow-up questions.

This transforms a static textbook into an interactive learning experience. During my weekly office hours, held from 12 to 8, students can drop in to watch me work with AI tools. I model usage rather than just explaining it. They observe my process, my critical evaluation, and my boundaries. This modeling teaches professional AI integration better than any policy statement could.

The Missing Foundation: AI Literacy Before Policy


My professional advice remains consistent: before adopting any syllabus policy, ensure AI literacy training is in place. Writing policies without that understanding creates problems rather than solutions. Faculty need to understand AI's capabilities and limitations within their specific teaching contexts.

Consider this comprehensive AI Integration Policy I developed:

“Philosophy: AI as Collaborative Learning Partner

This course embraces artificial intelligence as a collaborative learning partner rather than a task-completion tool. Consistent with social work values of collaboration, empowerment, and shared decision-making, we approach AI integration through the lens of ethical practice, professional development, and enhanced critical thinking.

Core Principles

Transparency and Honesty

All AI use must be documented and disclosed through mandatory declaration pages. This includes specifying which tools were used, for what purposes, and how they influenced your learning process. Academic integrity requires radical honesty about how AI contributed to your work.

AI as Learning Enhancement, Not Replacement

AI tools should enhance your critical thinking, not replace it. Use AI to explore multiple perspectives, generate questions, organize research, and challenge your assumptions. Do not use AI to write assignments, make clinical decisions, or provide final answers to complex social work questions.

Professional Boundary Maintenance

Never input confidential client information, personal details about peers, or sensitive practicum experiences into AI tools. Maintain the same confidentiality standards you would use in any professional context.

Cultural Humility and Bias Recognition

AI tools carry embedded biases that may perpetuate harmful assumptions about sexuality, culture, and identity. You are required to critically evaluate AI responses for bias, especially regarding diverse sexual expressions, cultural practices, and intersectional identities.”

The policy continues with detailed sections on permitted uses, prohibited uses, and mandatory declaration requirements. Every assignment requires students to document AI use, analyze alignment with NASW values, and reflect on ethical considerations. This transforms AI use from a hidden shortcut into a transparent learning process.

Moving Forward: Your Discipline Has Its Own AI Challenges

Every institution differs. Every discipline has unique pedagogical requirements. How students synthesize information varies across programs, and AI literacy levels range widely. When creating policies for any program, consult professors who teach in that specific context. Their perspective reveals nuances that general policies miss.

What are the "diagnostic moments" in your discipline? Where must students struggle with foundational concepts before AI can appropriately support their learning? These questions matter whether you teach social work, engineering, humanities, or business….any course.

AI integration in higher education isn't black and white. The moving pieces require constant attention, adjustment, and professional judgment. As we prepare students for a future where AI tools will be present in their professions, we must teach ethical integration while maintaining the core competencies of each discipline.

Stay curious,

Jason Founder