ARTIFICIAL INTELLIGENCE (“AI”) CONSIDERATIONS FOR NONPROFIT EMPLOYERS

Employers should carefully consider how an AI tool will be used before authorizing its use. This article discusses the risks in more detail below, but a few important considerations are: 

-          Hiring: Using AI in hiring can save time, for example by sorting applications, but it can also have adverse effects, such as violations of state and federal law;

-          Data Privacy: Using AI can have data privacy implications, as there are federal, state, and sometimes local laws governing data privacy;

-          Confidentiality and Intellectual Property: Using AI increases the risk of improper use or dissemination of confidential or proprietary company information, confidential employee information, and confidential applicant information.

Employment Law:

There are federal, state, and sometimes local laws pertaining to employment. These laws apply to almost every aspect of the employment relationship, from hiring, to promotion, to termination, and multiple laws address discrimination and retaliation.  

An AI tool used by an employer may inadvertently violate federal or state law. Employers should be very wary of relying on AI to filter applicants, for a variety of reasons. For example, a poorly designed or trained AI tool may have a bias, which the tool would then apply in filtering applicants. Another issue could involve an improperly trained AI tool sorting applicants based on inaccurate or incomplete information.  

These situations could lead to claims against the employer, such as for discrimination.  

Federal Law: Under federal law (enforced by the EEOC), it is illegal to discriminate against someone (applicant or employee) because of that person's race, color, religion, sex (including transgender status, sexual orientation, and pregnancy), national origin, age (40 or older), disability, or genetic information. It is also illegal to retaliate against a person because he or she complained about discrimination, filed a charge of discrimination, or participated in an employment discrimination investigation or lawsuit. The law forbids discrimination in every aspect of employment. 

Massachusetts: Massachusetts law prohibits discrimination on the basis of a person's membership in a protected class, such as: Race (including natural & protective hairstyles), Color, Disability, Age, Religious Creed, Sex, Pregnancy, including Nursing or Other Pregnancy Condition, Sexual Orientation, Parental Leave, Gender Identity and Gender Expression, Marital Status, National Origin, Ancestry, Active Military Status, Veteran Status, Retaliation, Genetic Information, Criminal Record, Public Assistance. The law also forbids discrimination in every aspect of employment. 

In addition, if the AI tool is trained to focus on compensation and salary information, an employer could inadvertently violate the federal Equal Pay Act or the Massachusetts Equal Pay Act. Under these laws, employers cannot discriminate against employees because of their gender when determining and paying wages, and cannot pay workers a salary or wage less than what they pay employees of a different gender for comparable work.  

Data Privacy:

As with employment law, there are various laws and official guidance governing data privacy and security, including the use of AI; organizations that engage in cross-border work or transactions are also subject to the laws of other countries that regulate AI.  Organizations are expected to have clear policies and procedures in place to address internal, permitted uses of AI.  Such protocols are most effective when different teams within an organization collaborate, such as marketing, information technology, program operations, management, and legal.  Board leadership is vital to an organization's success in addressing AI laws and mitigating risk.

A fundamental step in privacy and security is assessing the data that comes into, or is generated by, an organization and how that data is used, stored, and destroyed.  This analysis includes determining what data is fed into AI tools; whether that data is confidential, proprietary, or sensitive; where the data resides; how it is used and shared by the AI tools; and whether and how that data can be erased.  Once information is intentionally or unintentionally shared with an AI tool, the organization loses a measure of control.  Organizations will also want to conduct an AI inventory to ascertain the types of AI already in use, for which purposes, and whether changes are needed.  For various reasons, AI notetaking is generally not recommended for board, committee, or other sensitive meetings.  Moreover, in Massachusetts and some other states, consent is required before making an audio recording of other persons (an essential component of AI notetaking); violation of these laws can lead to criminal and civil liability. 

Questions to consider include: 

·         Does your organization know whether and how employees are using AI in their work? 

·         Are employees using free, publicly available AI tools?  If so, they may be putting the organization at risk of violating data privacy and security laws.  Data input by a user may appear in response to a third party’s prompt:  Is any sensitive information belonging to the organization being exposed in this way?

·         If the organization is interested in harnessing the capabilities of AI, does it license an internal AI tool and provide guidance and periodic instruction to employees regarding boundaries for AI use? 

o   Has the organization investigated where that data is stored and what will happen to it after the AI tool is no longer licensed for in-house use?

o   Has the organization considered parameters of acceptable use of AI tools by its employees, including which information it will allow to be inserted into an AI tool? 

Confidentiality and Intellectual Property:

Confidentiality:  Organizations will want to consider what information they permit to be inserted into an AI tool, whether any of it is confidential or sensitive, and how that information is being protected.  For instance, information about race, ethnicity, religion, or sexual orientation may be protected under Massachusetts and other state laws.  The organization will want to stress the data privacy principles of collecting and using only the minimum amount of information needed for a program and deleting that information (in accordance with the organization's document retention policy) when it is no longer needed. 

Intellectual Property:  Use of AI triggers various intellectual property considerations. 

Third Parties.  You may have heard about controversy in the news surrounding AI training that entailed unauthorized use of authors' materials.  Content generated by AI may include unlicensed materials, putting an organization at risk.  Any third-party materials should be used only with prior permission in place and proper attribution.   

AI-Generated Content.  US copyright law protects materials created by humans; non-humans, such as AI tools, are not regarded as authors.  Works that are a joint effort between a human and an AI tool may present issues in obtaining copyright protection and may not be copyrightable at all.  This area of law is still evolving, and organizations are encouraged to seek guidance from legal counsel regarding intellectual property rights and ramifications.   

Accuracy of AI Output.  AI outputs can contain errors, "hallucinated" information, or inapplicable information.  Employees should be required to review any work generated or edited by AI and revise the work as needed.  Essentially, AI is a tool to support employees' work:  The organization is responsible for the final work product. 

Attribution.  Use of AI can require attribution.  Examples of attribution disclosures provided by the Commonwealth of Massachusetts include: 

·         This memo was summarized by [generative AI Tool] using the following prompt: “Summarize the following memo: (memo content)”.

·         The summary was reviewed and edited by [insert name(s)].

·         This code was written with the assistance of [generative AI tool]. 

(Excerpt from:  “Enterprise Use and Development of Generative Artificial Intelligence Policy,” MA Executive Office of Technology Services and Security, Enterprise Privacy Office, effective 1/31/25, page 5, Sec. 5.2.  Note that this Policy establishes requirements applicable to “all state employees, contractors, consultants, vendors, and interns, including full-time, part-time, or voluntary…”  Policy at page 2, Sec. 2.1.) 

Questions to consider include: 

·         Are employees putting the organization at risk by entering confidential, sensitive, or unauthorized information into AI tools?

·         Are employees proofreading and editing any content drafted using an approved AI tool?  If the AI tool generates “hallucinated” or otherwise inappropriate content, it is the responsibility of the organization and its employees to identify, delete, and correct such information.

·         Is your organization creating material that it intends to copyright?  If so, the use of AI can pose an issue in obtaining copyright protection and the organization may want to consult intellectual property counsel.  

Recommendations:

-          Develop AI policies and procedures that reflect how AI tools are being used and define both permitted and prohibited uses;

-          Hold periodic employee training sessions;

-          Engage in cross-functional discussions as to how to safely but effectively use AI tools while mitigating risk to the organization;

-          Have a top-down approach to privacy, security, and AI, where an informed board of directors takes the lead for the organization;

-          Ask collaborators and third-party service providers for their AI Policies, especially if hiring is outsourced;

-          Keep up to date. This is a new area of the law, and laws and regulations are changing rapidly.