AI Policy

If you have any concerns about how we use AI, whether as a customer of Reputation Leaders or as a member of the public, please contact Laurence Evans (Laurence.Evans@reputationleaders.com) or David Lyndon (David.Lyndon@reputationleaders.com).

Reputation Leaders recognizes the benefits of artificial intelligence (AI) as both a tool for improving productivity and an integral element of workflow processes. Our AI Acceptable Use Policy sets out clear guidelines and best practices for using AI responsibly and ethically within Reputation Leaders. It aims to ensure that our employees use AI tools in a way that reflects our company values, complies with legal and regulatory obligations, and safeguards the interests and welfare of all our stakeholders.

 

Overview of AI Usage

At Reputation Leaders, we generally permit our staff to integrate AI into their daily tasks, although we do impose certain restrictions on its use. Our stance on Generative AI is based on three key principles:

 

  1. Generative AI should serve as a support tool for employees, rather than performing work for them.
  2. All established company policies are applicable when using Generative AI.
  3. We do not release customer IP or personal data to AI models that use this data for training.

 

Employees are expected to consult this Acceptable Use Policy or reach out to designated internal contacts when in doubt about the suitability of AI in specific situations. All AI usage by our team must conform to legal standards and must not be employed for unethical tasks.

 

AI Internal Contacts

David Lyndon, the Chief Information Security Officer, is responsible for managing AI usage within Reputation Leaders. This includes developing and revising AI-related policies. Employees unsure about whether their use of AI aligns with this policy should contact Laurence Evans (laurence.evans@reputationleaders.com) or David Lyndon (david.lyndon@reputationleaders.com) for further clarification on Reputation Leaders’ stance.

 

Data Privacy and Security

When utilizing AI systems, employees are required to adhere strictly to Reputation Leaders’ data privacy and security guidelines. This adherence extends to all facets of Generative AI use, and it encompasses compliance with our existing policies, including the Information Security Management Policy and company policies on confidential information, intellectual property, bias, harassment, discrimination, fraud, and other illicit activities.

 

Guidelines for Appropriate AI Use

Reputation Leaders encourages its employees to leverage AI to boost productivity. This section outlines ways AI can be appropriately utilized within the organization.

 

Human + AI Collaboration

All staff must be mindful of AI’s limitations and consistently apply their own judgment when considering AI-generated information. AI should enhance human decision-making, not replace it. Our team members bear full responsibility for their work and must rigorously evaluate AI-produced outputs for accuracy before relying on them for work purposes. If the accuracy of facts provided by Generative AI cannot be confirmed through reliable sources, that information should not be used for work-related purposes. The original sources should be referenced in the methodology or footnotes of any materials produced. Effective use of Generative AI includes designing prompts carefully, meticulously reviewing and adjusting outputs, and offering feedback to improve the tool’s accuracy.

 

Acceptable Uses

Employees at Reputation Leaders are permitted to use AI software for various general tasks and work types, including but not limited to:

 

  1. Answering general knowledge questions to deepen understanding of work-related topics.
  2. Generating ideas for ongoing projects, like designing questions.
  3. Creating formulas in Excel or similar applications.
  4. Composing drafts for emails or letters.
  5. Creating transcripts or summaries of meetings with specific participant consent.
  6. Summarizing online research or developing content outlines for comprehensive topic coverage. However, only content authored by Reputation Leaders employees can be included in the final output.

 

While Generative AI may be used to aid in drafting or revising documents and communications, it must not be used to produce final work without review, or for direct interactions with others inside or outside Reputation Leaders.

 

Transparency and Accountability in AI Usage

Our employees must maintain transparency about their AI usage in their professional roles, ensuring stakeholders understand the role of technology in decision-making. They are accountable for the results produced by AI systems and should be ready to justify and explain these outcomes.

Furthermore, all content generated by AI chatbots must be properly cited, as must any use of AI-generated content as a resource for company tasks, with the exception of general communications such as drafted emails. Because Generative AI chatbots may reproduce content from their training data, including copyrighted material, any text created wholly or partially by a chatbot cannot currently be copyrighted, trademarked, or patented under Reputation Leaders’ name.

 

Prohibited AI Uses

Reputation Leaders maintains a policy of allowing AI in the workplace, but in certain scenarios, its use is strictly prohibited. As noted earlier, all existing company guidelines are applicable to Generative AI use. To illustrate this rule, the following are examples of prohibited activities:

  1. Incorporating text from an AI chatbot into any final work product without review and accuracy checking.
  2. Incorporating text from an AI chatbot into any legal document.
  3. Sharing Reputation Leaders’ confidential or proprietary information through a Generative AI chat or by any means into a Generative AI tool.
  4. Sharing confidential details about clients or partners within a Generative AI chat or through any Generative AI platform.
  5. Sharing clients’ or partners’ IP or owned data within a Generative AI chat or through any Generative AI platform without express permission.
  6. Sharing Personally Identifiable Information into a Generative AI chat or system.
  7. Employing Generative AI in an unprofessional or disrespectful manner, or to partake in discrimination, harassment, or other forms of improper conduct.
  8. Engaging in any actions with Generative AI that breach Reputation Leaders’ codes of conduct.
  9. Using Generative AI for unlawful activities, including but not restricted to fraud, intellectual property theft, and copyright violation.
  10. Allowing AI or any machine learning (ML) algorithm to run or produce outcomes without human supervision.

 

For absolute clarity, any entry of confidential data into these tools is prohibited, irrespective of the sharing method. This includes, but is not limited to, typing directly, copying and pasting, uploading files, and using multimedia methods. Only publicly available information is permitted for entry into Generative AI tools. Employees uncertain about the confidentiality of information should seek guidance from Laurence Evans (laurence.evans@reputationleaders.com) or David Lyndon (david.lyndon@reputationleaders.com).

 

Responsible AI Use

Employees are expected to handle AI tools ethically and responsibly, avoiding behaviour that could inflict harm, breach privacy, or lead to malevolent acts.

Employees are required to be proactive in identifying and correcting biases in AI systems and outputs to guarantee their fairness and inclusivity, ensuring they do not discriminate against any individual or group. It is also essential that these technologies are not used to produce content that could be deemed offensive, prejudiced, or damaging to others or to the organization. Engaging in such activities will lead to disciplinary procedures, which could include termination of employment.

 

 

Training

Employees engaging with AI systems must undertake the necessary training to handle these tools responsibly and efficiently. The Chief Information Security Officer is charged with delivering this training. It is also vital for employees to keep up with the latest developments in AI and be aware of the ethical implications that may arise. Training will take place alongside the annual ISO training.

 

Policy Review and Updates

Reputation Leaders commits to periodically reviewing this Acceptable Use Policy to ensure it stays up to date and in line with our approach to managing risks associated with Generative AI applications. We will inform all staff members of any amendments or updates. Reputation Leaders retains the discretion to modify this policy at any time, without prior notice. We welcome employees to ask questions, seek clarifications, and suggest enhancements or modifications to this policy. We will continually monitor the AI landscape to stay informed and empowered to make wise, ethical decisions. The review of this policy will take place alongside the ISO audit of Reputation Leaders’ Privacy Policy.

 

Acknowledgment

By employing Generative AI in their professional capacity at Reputation Leaders, employees confirm their understanding of this policy and their commitment to adhere to its stipulations.