AI Policy
Last Updated: May 2024
Introduction: Overview of Included.AI Ltd’s AI Policy
Our AI Policy is designed to ensure transparency, ethical usage, and compliance in integrating AI technologies across Included.AI Ltd. This policy is intended for all users and stakeholders of the Clu platform (“Clu”), including job seekers, organisation users, and developers.
This policy outlines how Included.AI Ltd uses AI to enhance user experience and operational efficiency. It covers the specific functions of CluGPT, our integration with OpenAI's Large Language Models (LLMs), and CluAI, our proprietary algorithmic data engine. The policy also details our commitment to data privacy, ethical guidelines, bias mitigation, and continuous monitoring.
Included.AI Ltd’s Chief Technology Officer (CTO) is responsible for enforcing our Ethical AI Guidelines and ensuring compliance with the outlined regulations and procedures. If you have further questions or need additional information regarding this policy, please contact our CTO at cayelan.mendoza@getaclu.io.
Purpose and Scope
Included.AI Ltd utilises the Large Language Models (LLMs) of data processor OpenAI for non-core helper functions in Clu, and this integration is called CluGPT. The purpose of CluGPT is to assist users in carrying out time-consuming tasks faster and more efficiently, including:
- Sorting imported data to pre-populate user input fields (e.g. pre-filling work history fields when importing a CV).
- Suggesting alternative wordings to help remove jargon and aggressive language (e.g. speeding up the process of writing a good job spec).
- Recommending simple questions that could be asked to understand how somebody applies or has applied particular skills (e.g. speeding up interview preparation).
- Finding inferred relations between particular words that are too subtle for word-matching algorithms to detect (e.g. “By X, did you mean Y?”).
In all cases, CluGPT output is in the form of suggestions that a user can choose to accept. A user does not need to use CluGPT to carry out any core functionality in Clu.
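To illustrate this suggestion-only pattern, below is a minimal sketch of how such a helper could be structured. The function and field names are hypothetical and not taken from Clu's codebase, and `call_llm` stands in for the actual model call; the key property shown is that model output is only ever returned as suggestions that the user must explicitly accept.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    """A CluGPT-style output: a proposed value the user may accept or reject."""
    field: str           # e.g. "job_title" (hypothetical field name)
    proposed_value: str  # value suggested by the model
    accepted: bool = False

def suggest_work_history_fields(imported_cv_text: str, call_llm) -> list[Suggestion]:
    """Ask the model to propose work-history field values from an imported CV.

    `call_llm` is a stand-in for the real model call. The prompt is fixed, and
    the result is only ever returned as suggestions, never written directly.
    """
    prompt = (
        "Extract the most recent job title and employer from the CV below. "
        "Answer as 'title: ...' and 'employer: ...' on separate lines.\n\n"
        + imported_cv_text
    )
    raw = call_llm(prompt)
    suggestions = []
    for line in raw.splitlines():
        if ":" in line:
            field, value = line.split(":", 1)
            suggestions.append(Suggestion(field=field.strip(), proposed_value=value.strip()))
    return suggestions

def apply_accepted(profile: dict, suggestions: list[Suggestion]) -> dict:
    """Only suggestions the user has explicitly accepted are written to the profile."""
    for s in suggestions:
        if s.accepted:
            profile[s.field] = s.proposed_value
    return profile
```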
CluAI, our in-house algorithmic data engine, handles core platform logic exclusively. The purpose of CluAI is to perform specific calculations based on purpose-built first-party data sets to ensure objective and explainable results, including:
- Associative data triangulation for calculating the prominence of particular data values under certain conditions, e.g. skills vs demographics vs locations vs job types.
- Preference and suitability matching, e.g. calculating match scores and whether user preferences and permissions align (illustrated in the sketch after this list).
- Flagging data quality issues, e.g. a candidate with senior experience but no work history on their profile, which suggests a possible data entry error.
- Probability-driven tips and gamification, e.g. “Clu users with a personal blurb on their profile are X times more likely to progress to interview”.
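As a rough illustration of the deterministic, explainable calculations described above, the sketch below computes a match score from first-party data only. The weights, field names, and scoring formula are hypothetical, not Clu's actual logic; the point is that the result is reproducible and comes with the evidence behind it.

```python
def match_score(candidate_skills: set[str], required_skills: set[str],
                preferred_locations: set[str], job_location: str) -> dict:
    """Deterministic, explainable match calculation on first-party data only.

    Returns the score together with the evidence behind it, so the result can
    be explained to the user. Weights and fields are illustrative.
    """
    matched = candidate_skills & required_skills
    skill_fit = len(matched) / len(required_skills) if required_skills else 0.0
    location_fit = 1.0 if job_location in preferred_locations else 0.0

    score = round(100 * (0.8 * skill_fit + 0.2 * location_fit))
    return {
        "score": score,
        "matched_skills": sorted(matched),
        "missing_skills": sorted(required_skills - candidate_skills),
        "location_match": bool(location_fit),
    }

# Example: a candidate with 3 of 4 required skills in a preferred location.
print(match_score({"sql", "python", "communication"},
                  {"sql", "python", "communication", "leadership"},
                  {"London"}, "London"))  # score 80, missing ['leadership']
```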
Compliance and Regulations
To ensure the privacy of our users, CluGPT relies on:
- Encrypted data transmission to private endpoints only.
- Pseudonymisation as stipulated in Chapter 1 Article 4 GDPR (e.g. an imported CV may contain the user’s name and contact information, which we will replace locally before sending to CluGPT; see the sketch following this list).
- A signed agreement with our data processor, OpenAI, regarding compliance with the obligations outlined in Chapter 4 Article 28 GDPR (i.e. assisting in the fulfilment of data subjects’ data protection rights and ensuring data processing security).
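The sketch below illustrates the local pseudonymisation step referenced above: direct identifiers are replaced with placeholders before any text leaves the platform, and restored locally when the response comes back. The patterns and placeholder tokens are hypothetical simplifications; real pseudonymisation covers a wider range of identifiers.

```python
import re

def pseudonymise(cv_text: str, user_name: str, user_email: str) -> tuple[str, dict]:
    """Replace direct identifiers with placeholders before text is sent externally.

    Returns the pseudonymised text plus a local mapping so the original values
    can be restored when suggestions come back. This is a simplification of
    pseudonymisation as defined in GDPR Art. 4(5).
    """
    mapping = {"[NAME]": user_name, "[EMAIL]": user_email}
    text = cv_text.replace(user_name, "[NAME]")
    text = re.sub(re.escape(user_email), "[EMAIL]", text, flags=re.IGNORECASE)
    # Also catch any other e-mail addresses, e.g. referees' contact details.
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)
    return text, mapping

def restore(text: str, mapping: dict) -> str:
    """Re-insert the original values locally after the response is received."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text
```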
CluAI is strictly first-party and does not share any data externally.
A job seeker user can exercise their right to be forgotten as part of the account deactivation process. This will immediately delete their user record and anonymise their candidate record. It will also automatically remove them from employer talent pools and withdraw any pending applications.
An organisation user has no personal ownership of data; it belongs to their organisation. Their account can be deleted by an admin user within their organisation or by Clu Support; however, any records created by or attributed to them will remain intact.
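A minimal sketch of the job seeker deactivation sequence described above is shown below. The data-access layer (`db`) and its function names are hypothetical and exist only to make the ordering of the policy's steps concrete.

```python
def deactivate_job_seeker(user_id: str, db) -> None:
    """Right-to-be-forgotten flow mirroring the policy description above.

    `db` is a stand-in data-access layer; all function names are hypothetical.
    """
    # Delete the user record and anonymise the associated candidate record.
    db.delete_user_record(user_id)
    db.anonymise_candidate_record(user_id)

    # Automatically exit employer talent pools and withdraw pending applications.
    db.remove_from_talent_pools(user_id)
    db.withdraw_applications(user_id, status="pending")
```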
Ethical Guidelines
Human oversight is critical in decision-making processes where AI is used. CluAI and CluGPT circumvent ethical conflicts by assisting user workflows only and leaving decision-making to humans. They also use non-user-customisable queries to prevent user input from corrupting their intended usage.
A feedback mechanism is available in Clu for users to report any concerns or issues with AI-generated suggestions. This contributes to ongoing ethical considerations, which are logged in our Ethical AI Risk Register and reviewed periodically in Included.AI Ltd.’s Corporate Governance meetings.
Transparency and Explainability
Whenever CluGPT is used in Clu, we explain what it is doing to the user (e.g. on-screen alerts such as “Clu is thinking about questions you could ask”).
Whenever CluAI is used in Clu, it provides feedback to the user on what has occurred through system alerts/toasts and visible UI changes (e.g. adding an additional skill to a talent pool will trigger an immediate recalculation of who is suitable, keeping the user informed as to what the technology is doing at all times).
CluGPT and CluAI's performance and reliability are measured and reported regularly. Specific examples of performance and reliability metrics include accuracy rates, response times, and user satisfaction scores, which are detailed in our Performance Metrics Report.
We update our company’s Board of Directors on these metrics and how frequently they are collected, and ensure users are informed about our AI systems’ ongoing improvements and reliability.
Bias Mitigation
We do not allow users to input questions or their own instructions into CluAI or CluGPT. Instead, queries are pre-determined with specific parameters outside the user’s control to ensure CluAI and CluGPT are always used as intended. This, in combination with our ethical guidelines around assisting but never deciding on behalf of humans, allows us to circumvent AI-informed decision-making bias entirely.
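To illustrate the pre-determined query approach, the sketch below shows how a fixed prompt template could accept only whitelisted, validated parameters, so that free-text user input never alters the instruction itself. The template text, parameter names, and allowed values are hypothetical.

```python
# Pre-determined query templates: the instruction text is fixed, and only
# whitelisted, validated parameters may be substituted into it.
TEMPLATES = {
    "interview_questions": {
        "prompt": "Suggest three simple interview questions that explore how a "
                  "candidate has applied the skill '{skill}' at a {seniority} level.",
        "params": {"skill", "seniority"},
    },
}

ALLOWED_SENIORITY = {"junior", "mid", "senior"}

def build_query(template_id: str, **params) -> str:
    """Build a query from a fixed template; reject anything outside the whitelist."""
    template = TEMPLATES[template_id]
    if set(params) != template["params"]:
        raise ValueError("Unexpected or missing parameters")
    if params.get("seniority") not in ALLOWED_SENIORITY:
        raise ValueError("Seniority must be one of the allowed values")
    # User-supplied values are slotted into placeholders only; the instruction
    # text itself can never be changed by the user.
    return template["prompt"].format(**params)

# Example: build_query("interview_questions", skill="SQL", seniority="senior")
```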
Data Governance
All Included.AI Ltd data is stored on GDPR-compliant servers based in the United Kingdom, and access is sandboxed by user persona and organisation (e.g., compromised login credentials from one organisation cannot be used to access another organisation's data or another user type). Two-factor authentication is mandatory for all users, and trust can only be established with one device at a time. Users will be automatically logged out after a fixed period of inactivity, and device trust has a maximum lifespan of two weeks.
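The access controls above can be summarised roughly as the configuration sketch below. The setting names are illustrative; mandatory two-factor authentication, the single trusted device, and the two-week trust lifespan come from the policy, while the inactivity timeout length is a placeholder because the policy states only that a fixed period applies.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass(frozen=True)
class SessionPolicy:
    """Access-control settings reflecting the policy above (names are illustrative)."""
    two_factor_required: bool = True                        # mandatory for all users
    max_trusted_devices: int = 1                            # trust one device at a time
    device_trust_lifespan: timedelta = timedelta(weeks=2)   # maximum of two weeks
    inactivity_timeout: timedelta = timedelta(minutes=30)   # placeholder value; the
    # policy requires automatic logout after a fixed period but does not state its length

POLICY = SessionPolicy()
```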
CluGPT processes but does not retain any data and can only be given access to data by explicit user instruction, e.g. clicking a button to initiate an action. CluAI does not share any data externally.
Our Data Management Policy clearly defines data retention and deletion policies for data processed by CluGPT and CluAI. It outlines retention periods for different types of data, criteria for data deletion, and the process for users to request data deletion or access their data.
Access controls are also in place to protect sensitive data from unauthorised access, ensuring robust data protection measures.
Consent and Control
Individuals are presented with terms of service to which they must agree as part of signing up to Clu, and organisations that subscribe to Clu must agree to terms of service as part of the purchase process.
Information sharing in Clu is strictly opt-in. This means a user’s information will not be shared with other Clu users or third parties without explicit permission, e.g. lodging a job application, accepting an invitation, following a company, etc. Users can revoke access to their data in full at any point, for example by withdrawing applications, unfollowing companies, or deactivating their accounts.
As outlined in Clu's terms of service, a limited number of Included.AI Ltd's employees can view and administer user data for technical support and account management purposes. Our Privacy Policy provides more detailed information on how users can manage their consent, including how they can review, modify, or withdraw consent for data processing and AI use.
Accountability and Responsibility
Included.AI Ltd’s CTO is accountable for enforcing our ethical AI guidelines, and our Board of Directors is accountable for the governance surrounding them. The CTO has final approval for merging development branches into Clu’s source code, allowing them to block non-compliant code that is not caught by checks and balances earlier in the development process.
Clu has service agreements with its infrastructure partner, SiteGround, and its data processor, OpenAI, for handling compliance issues outside the Clu ecosystem, such as updating server software to comply with new regulations.
Continuous Monitoring and Evaluation
Included.AI Ltd conducts routine functionality testing as part of its CI/CD pipeline to ensure Clu works as intended. This includes a suite of manual and automated tests of individual functionalities and workflows prior to each software release and a sandboxed test system and database for development purposes to ensure untested or experimental functionality does not interact with live data.
Quality and structure audits are carried out on Clu data between development sprints, and data purges and refactoring occur frequently to address issues or inconsistencies found in these audits.
AI impact assessments are conducted bi-annually to evaluate the social, ethical, and privacy implications of AI use in Clu. Third-party audits are conducted annually to ensure compliance with ethical standards and regulatory requirements, providing additional assurance to users and stakeholders.
All reports, testing, and assessment findings are reviewed by Included.AI Ltd’s Board of Directors before submission and are tracked via the company’s Ethical AI Risk Register.
Training and Awareness
Included.AI Ltd. provides training and guidance internally and externally to help users understand how to use CluGPT and CluAI effectively.
We have internal guides for developers to ensure they build Clu according to our established coding standards and follow our development principles. Adherence to these standards and principles is linked to staff performance metrics so that they stay front of mind for our developers.
For external users, support channels including a helpdesk, FAQs, designated training and onboarding sessions, and user manuals are available, with specific documentation and support for those who encounter issues or have questions about our AI features, ensuring they receive assistance promptly.
The content of this training and guidance is revisited on a bi-annual basis.
Third-Party Partnerships
CluGPT is an OpenAI integration that uses a private authentication token, SSL encryption, and data payload checksums to ensure safe data transmission.
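As a rough sketch of the transmission safeguards listed above, the snippet below sends a payload over HTTPS with a bearer token and a SHA-256 payload checksum the receiver can verify. The endpoint URL and header names are hypothetical, and the token placeholder is deliberately left blank.

```python
import hashlib
import json
import requests  # assumed available; any HTTPS client would do

API_URL = "https://api.example.com/v1/suggestions"  # hypothetical endpoint
API_TOKEN = "..."                                   # private authentication token

def send_payload(payload: dict) -> requests.Response:
    """Send a pseudonymised payload over TLS with a checksum the receiver can verify."""
    body = json.dumps(payload, sort_keys=True).encode("utf-8")
    checksum = hashlib.sha256(body).hexdigest()
    headers = {
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
        "X-Payload-Checksum": checksum,  # hypothetical header name
    }
    # Transport encryption ("SSL") is provided by the https:// scheme.
    return requests.post(API_URL, data=body, headers=headers, timeout=30)
```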
When choosing a third party to partner with, we require that they:
- Do not use our data for their own purposes or share it with any other parties.
- Provide sufficiently secure infrastructure for safely sending data to and receiving data from them.
- Have server locations in the jurisdictions where we operate.
We review any changes in third-party policies whenever they notify us, so that we can decide whether to continue working with them if the changes do not meet our requirements.
Complaints and Grievances
We provide users with in-platform feedback forms and direct email access so they can easily raise questions and concerns. If a security incident or breach is identified, we will follow ISO 27001 guidelines in handling it and report to the relevant regulatory authorities as needed.
Included.AI Ltd has a detailed incident response plan in place to handle security incidents or breaches. This includes:
Step-by-Step Procedures:
- Identification: Detect and identify the nature and scope of the incident and its priority (e.g., priority 1 for no platform or feature access, priority 2 for limited platform or feature access, priority 3 for access/feature usage being possible with a workaround, priority 4 for inconveniences, etc.; see the sketch after this list).
- Containment: Take immediate steps to contain the incident to prevent further damage. This includes isolating affected systems and stopping any ongoing malicious activity.
- Eradication: Eliminate the incident's root cause, such as removing malware or closing vulnerabilities.
- Recovery: Restore affected systems and data to normal operation, ensuring no vulnerabilities remain.
- Post-Incident Review: Conduct a thorough review of the incident to identify lessons learned and implement improvements to prevent future incidents.
- Documentation: Ensure company registers are updated and approved by the Board of Directors.
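The priority levels used in the identification step could be encoded roughly as below. The enum and the classification logic are illustrative only; the labels are taken from the list above.

```python
from enum import IntEnum

class IncidentPriority(IntEnum):
    """Priority levels from the identification step above (structure is illustrative)."""
    P1_NO_ACCESS = 1        # no platform or feature access
    P2_LIMITED_ACCESS = 2   # limited platform or feature access
    P3_WORKAROUND = 3       # access/feature usage possible with a workaround
    P4_INCONVENIENCE = 4    # inconveniences

def classify(platform_down: bool, feature_down: bool, workaround_exists: bool) -> IncidentPriority:
    """Map an incident's observed impact to a priority level (illustrative mapping)."""
    if platform_down:
        return IncidentPriority.P1_NO_ACCESS
    if feature_down:
        return (IncidentPriority.P3_WORKAROUND if workaround_exists
                else IncidentPriority.P2_LIMITED_ACCESS)
    return IncidentPriority.P4_INCONVENIENCE
```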
Notification Timelines:
- Internal Notification: Inform relevant internal stakeholders within 24 hours of identifying the incident.
- User Notification: Notify affected users within 72 hours of identifying a significant data breach, providing details about the nature of the incident, the data affected, and steps taken to mitigate the impact.
- Authority Notification: Report to relevant regulatory authorities as required by law within the specified timelines (e.g., within 72 hours for GDPR compliance).
Roles and Responsibilities:
- Incident Response Team: A dedicated team responsible for managing and responding to incidents, including representatives from IT, security, legal, and communications.
- Incident Manager: A designated individual responsible for overseeing the incident response process and ensuring all steps are followed.
- Communication Lead: Responsible for coordinating communication with users, stakeholders, and regulatory authorities.
Consequences of Non-Compliance
Internal non-compliance with our AI Policy may be considered gross misconduct, which is grounds for disciplinary action up to and including termination. We will suspend systems access to mitigate risks while we investigate internal incidents of non-compliance and notify any impacted users.
In the event of non-compliance from a third-party vendor, we will follow their complaints and support process, and depending on the impact and severity of the non-compliance, we may seek to terminate our relationship.
Policy Review and Updates
This AI Policy will be reviewed annually to ensure it remains updated with technological advancements, regulatory changes, and user feedback. The CTO is responsible for conducting the review and incorporating necessary updates.
By incorporating these areas into the AI policy, Included.AI Ltd reaffirms its commitment to responsible and ethical AI usage, robust data protection, and user support practices.