The Human Factor

Aug 17, 2025

The vulnerabilities outlined in our previous discussion of prompt injection attacks represent more than technical challenges, they reveal a fundamental gap in how our profession approaches AI security in practice. While understanding the mechanics of these attacks provides essential awareness, the critical question facing social workers today is not whether these threats exist, but rather how we can transform ourselves from potential victims into active defenders of client confidentiality. The integration of artificial intelligence into social work practice has created an unprecedented need for practitioners who can recognize, respond to, and prevent security breaches that traditional training never prepared us to handle.

The National Association of Social Workers has begun to acknowledge this reality through updated standards that require clinical social workers to be “transparent and technologically knowledgeable about using artificial intelligence,” yet the gap between this expectation and the current state of professional preparation remains vast [2]. Most social workers enter AI-integrated practice environments with little understanding of how their daily interactions with these systems can either strengthen or compromise client protection. This knowledge deficit represents a critical vulnerability that extends far beyond individual practitioners to encompass entire organizations and the clients they serve.

60 Watts of Clarity is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
 
 

The Professional Imperative: Why Social Workers Must Lead AI Security


The responsibility for AI security in social work practice cannot be delegated entirely to information technology departments or external consultants, despite the technical complexity of these systems. Social workers occupy a unique position in the AI security ecosystem because we are simultaneously the primary users of AI-enhanced tools and the professional guardians of client confidentiality. This dual role creates both vulnerability and opportunity, vulnerability because our lack of technical knowledge can be exploited, and opportunity because our deep understanding of ethical practice and client protection provides essential context for effective security measures.

Recent research has demonstrated that prompt injection attacks targeting healthcare AI systems can be hidden in virtually any infrastructure component, making traditional perimeter security approaches insufficient [3]. In social work practice, this means that attacks can potentially be embedded in client intake forms, assessment tools, case management systems, or even seemingly innocuous administrative communications. The sophistication of these attacks requires a response that goes beyond technical solutions to include human judgment, professional expertise, and ethical decision-making—areas where social workers possess unique competencies.

Immediate Action Step 1: Conduct a Personal AI Audit


Begin by documenting every AI-powered tool or system you interact with during a typical work week. This includes obvious applications like chatbots or automated assessment tools, but also less apparent AI integrations such as predictive text in documentation systems, automated scheduling tools, or data analysis features in case management software. Create a simple spreadsheet with columns for: Tool Name, Purpose, Type of Client Data Accessed, Frequency of Use, and Security Concerns. This audit will serve as the foundation for developing your personal AI security strategy.

The cost of inaction in this area extends far beyond potential regulatory violations or professional sanctions. When social workers lack the knowledge and skills to recognize AI security threats, we inadvertently become conduits for attacks that can compromise not only our own clients but entire networks of vulnerable individuals. A single successful prompt injection attack against a social worker’s AI tools could potentially expose the confidential information of hundreds of clients, creating cascading effects that undermine trust in the entire human services system.

Understanding Your Role in the AI Security Ecosystem


The traditional model of cybersecurity, which relies primarily on technical controls and user compliance with predetermined rules, proves inadequate when applied to AI systems that learn and adapt based on user interactions. Social workers using AI tools are not merely passive consumers of technology but active participants in systems that continuously evolve based on the data and prompts they provide. This dynamic relationship means that every interaction with an AI system represents both an opportunity to strengthen security and a potential vulnerability that could be exploited.

Consider the complexity of a typical social work interaction with an AI-powered case management system. The social worker provides contextual information about a client’s situation, asks for analysis or recommendations, and receives AI-generated insights that inform treatment planning. Throughout this process, the AI system is processing sensitive information, making inferences based on patterns in its training data, and generating outputs that will influence critical decisions about client care. Each step in this process presents potential attack vectors that require human judgment to identify and address.

Immediate Action Step 2: Develop Threat Recognition Skills


Practice identifying potential prompt injection attempts by learning to recognize common attack patterns. Start by familiarizing yourself with these warning signs: requests that ask you to ignore previous instructions, prompts that claim to be from system administrators or supervisors, messages that request you to repeat or confirm sensitive information, and communications that seem designed to test system boundaries or extract information about other clients. Create a simple checklist of these warning signs and keep it accessible during your work with AI systems.

The psychological dimension of prompt injection attacks adds another layer of complexity to the social worker’s role in AI security. These attacks often exploit the same interpersonal dynamics that social workers encounter in their therapeutic relationships—manipulation, deception, boundary testing, and attempts to establish false trust or authority. The skills that make social workers effective in recognizing and responding to these dynamics in human relationships can be adapted and applied to interactions with AI systems, but only if we understand how these psychological principles operate in digital environments.

Immediate Action Step 3: Establish AI Interaction Protocols


Develop a standardized approach to AI interactions that includes verification steps and boundary maintenance. Before providing sensitive information to any AI system, pause and ask yourself: Is this request consistent with the system’s intended purpose? Am I being asked to override normal security procedures? Does this interaction feel similar to boundary-testing behaviors I’ve encountered in clinical practice? Create a simple decision tree that guides you through these questions and establishes clear criteria for when to proceed, seek supervision, or report potential security concerns.


The development of AI security competence among social workers requires a systematic approach that builds on existing professional knowledge while introducing new technical concepts and skills. This competence cannot be developed through one-time training sessions or brief orientation programs, but rather requires ongoing professional development that keeps pace with the rapidly evolving landscape of AI technology and security threats.

The foundation of AI security competence rests on understanding how AI systems process and store information, how they can be manipulated through carefully crafted inputs, and how social workers can recognize and respond to potential threats. This understanding must be grounded in practical experience with actual AI tools rather than abstract theoretical knowledge. Social workers need opportunities to interact with AI systems in controlled environments where they can observe how different types of inputs produce different outputs and learn to identify patterns that might indicate security vulnerabilities.

Immediate Action Step 4: Create a Learning Partnership


Identify a colleague or supervisor who shares your interest in AI security and establish a regular learning partnership. Meet weekly to discuss AI-related challenges you’ve encountered, share resources and articles about AI security in healthcare and social services, and practice recognizing potential threats using real-world scenarios. Document your learning in a shared journal that can serve as a resource for other team members and contribute to your organization’s growing knowledge base about AI security.

Professional competence in AI security also requires understanding the broader ecosystem of threats and protections that surround AI systems. Social workers need to know how their individual actions and decisions can impact the security of entire networks and systems. This systems-level thinking aligns with social work’s ecological perspective but requires application to technological environments that may be unfamiliar to many practitioners.

Immediate Action Step 5: Develop Incident Response Skills


Create a personal incident response plan that outlines specific steps to take if you suspect a prompt injection attack or other AI security threat. Your plan should include: immediate actions to protect client information (such as discontinuing the AI interaction and securing any exposed data), notification procedures for supervisors and IT personnel, documentation requirements for potential security incidents, and follow-up steps to prevent similar occurrences. Practice implementing this plan through tabletop exercises with colleagues to ensure you can respond effectively under pressure.

The integration of AI security competence into social work practice also requires understanding how these new responsibilities intersect with existing ethical obligations and professional standards. Social workers must learn to navigate situations where AI security concerns conflict with other professional priorities, such as the need to provide timely services to clients in crisis or the pressure to increase productivity through AI-enhanced tools.

Organizational Readiness: Creating AI-Resilient Workplaces

Individual competence in AI security, while essential, cannot fully protect clients and organizations without corresponding changes in organizational policies, procedures, and culture. Social work organizations must develop comprehensive approaches to AI governance that address not only technical security measures but also the human factors that determine how AI systems are used in practice. This organizational transformation requires leadership commitment, resource allocation, and systematic change management that recognizes the complexity of integrating new security practices into established workflows.

Immediate Action Step 6: Advocate for Organizational AI Policies


If your organization lacks comprehensive AI governance policies, take the initiative to advocate for their development. Begin by researching model policies from other social service organizations and professional associations. Prepare a brief proposal that outlines the need for AI governance, identifies key policy areas that should be addressed, and suggests a timeline for policy development and implementation. Present this proposal to your supervisor or organizational leadership, emphasizing the client safety and risk management benefits of proactive AI governance.

The development of organizational AI security capabilities must address the unique characteristics of social work practice environments, including high staff turnover, limited technology budgets, diverse educational backgrounds among staff, and the emotional intensity of client work that can impact decision-making and attention to security protocols. These factors require tailored approaches that differ significantly from AI security strategies developed for other industries or practice settings.

Effective organizational AI security also requires ongoing monitoring and evaluation systems that can detect potential threats and measure the effectiveness of security measures over time. Social work organizations need to develop metrics and indicators that reflect both technical security performance and the human factors that influence security outcomes. This might include tracking the frequency of security incidents, measuring staff confidence in recognizing and responding to threats, and assessing the impact of security measures on service delivery and client satisfaction.

Immediate Action Step 7: Establish Security Monitoring Practices


Work with your organization to establish regular security monitoring practices that include both technical and human elements. This might involve monthly reviews of AI system logs to identify unusual patterns, quarterly surveys of staff to assess their confidence in recognizing security threats, and annual assessments of organizational AI security policies and procedures. Volunteer to participate in or lead these monitoring efforts, using your growing expertise in AI security to contribute to organizational learning and improvement.

The Path Forward: Professional Development and Advocacy


The transformation of social work practice to address AI security challenges requires sustained commitment to professional development that extends far beyond traditional continuing education models. Social workers must become active participants in shaping the future of AI in human services rather than passive recipients of technological change imposed by others. This transformation requires both individual initiative and collective action through professional associations, educational institutions, and policy advocacy efforts.

Immediate Action Step 8: Join Professional AI Learning Networks


Actively seek out and participate in professional networks focused on AI in social work and human services. This might include joining NASW committees or interest groups related to technology, participating in online forums and discussion groups, attending conferences and workshops on AI ethics and security, and connecting with researchers and practitioners who are working on these issues. Use these networks to stay current with emerging threats and best practices while contributing your own experiences and insights to the collective knowledge base.

The future of AI security in social work practice will be determined largely by the actions that practitioners take today to build competence, advocate for appropriate policies, and create organizational cultures that prioritize both innovation and protection. Social workers who develop expertise in AI security will be positioned to lead their organizations and profession in navigating the complex challenges that lie ahead, while those who remain passive observers may find themselves increasingly vulnerable to threats they do not understand or recognize.

Immediate Action Step 9: Develop Teaching and Mentoring Capabilities


As you build your own AI security competence, begin developing your ability to teach and mentor others in these skills. Create simple training materials that explain AI security concepts in language that other social workers can understand, volunteer to lead workshops or training sessions for colleagues, and mentor newer practitioners who are just beginning to encounter AI systems in their work. By becoming a teacher and mentor in this area, you not only contribute to the profession’s overall capacity but also deepen your own understanding and expertise.

The integration of AI security competence into social work practice represents both a challenge and an opportunity for professional growth and leadership. Social workers who embrace this challenge will find themselves at the forefront of efforts to ensure that technological advancement serves the profession’s fundamental commitment to client welfare and social justice. Those who develop expertise in this area will be positioned to influence policy decisions, guide organizational change, and mentor the next generation of practitioners who will inherit an increasingly AI-integrated practice environment.

The responsibility for protecting client confidentiality in an AI-enhanced world cannot be delegated to others or deferred to future generations of practitioners. It requires immediate action from current social workers who are willing to expand their professional competence, advocate for appropriate protections, and lead their organizations in developing AI-resilient practices. The steps outlined in this article provide a starting point for this transformation, but the ultimate success of these efforts will depend on the collective commitment of individual practitioners to become active defenders of client confidentiality in the digital age.

Stay curious,

Jason Linktree / 60 Watts of Clarity

“Jason Fernandez is a dedicated social worker and an AI technology consultant at the Graduate College of Social Work at the University of Houston, passionate about integrating ethical innovation within human-centered practices”


 
"Thank you for being a valued subscriber to my journey with 60 Watts of Clarity. Your support not only fuels the creation of AI education and training but also actively shapes the content, tailoring it to meet the needs of our social work community. Together, we are paving the path toward greater AI literacy and ethical integration. Your contribution is vital in driving future initiatives in video newsletters and podcasts designed to empower social workers and educators in higher education."



References

[1] Hamid, R., & Brohi, S. (2024). A review of large language models in healthcare: Taxonomy, threats, vulnerabilities, and framework. Big Data and Cognitive Computing, 8(11), 161. https://www.mdpi.com/2504-2289/8/11/161

[2] National Association of Social Workers. (2024). NASW Standards for Clinical Social Work in Social Work Practice. https://www.socialworkers.org/Practice/NASW-Practice-Standards-Guidelines/NASW-Standards-for-Clinical-Social-Work-in-Social-Work-Practice

[3] Nature Communications. (2024). Prompt injection attacks on vision language models in oncology. https://www.nature.com/articles/s41467-024-55631-x