AI, APC AND CPD: New RICS Guidance on AI for APC Candidates
- Jen Lemen

Jen is a partner and co-founder of Property Elite.

Jen explores the new RICS guidance on AI for RICS APC candidates and Regulated Firms. Note also that AI is now a mandatory area in the 2026 CPD framework for all chartered surveyors [see Donna Best’s article in this issue].
What do you need to know for your final assessment interview, and how can you stay on the right side of the new RICS requirements?
Over the past 12 months, RICS has released a variety of new guidance on the use of AI by surveyors. This reflects the increased usage of AI by the industry; and, in turn, the associated risks that need to be carefully managed.
So, what is AI? AI, or Artificial Intelligence, is defined by the Collins dictionary as ‘a type of computer technology which is concerned with making machines work in an intelligent way, similar to the way that the human mind works’. This includes skills such as learning, creating, reasoning, translating, problem-solving and decision-making.
RICS has adopted the following definition of AI, which is more specific. It is ‘a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment’.
RICS further acknowledges that AI comes in many different forms and may be part of a wider technological solution.
There are three main types of AI:
Artificial Narrow Intelligence (ANI) – used to perform specific tasks and not able to learn beyond what it is programmed to do. Examples include Amazon Alexa, Apple Siri and Google Maps
Artificial General Intelligence (AGI) – a hypothetical concept which would be used across wide contexts and able to learn and adapt to new situations
Artificial Super Intelligence (ASI) – a hypothetical concept which would operate beyond human-level intelligence.
Both AGI and ASI are concepts that computer scientists are working towards creating; if realised, they will bring their own new raft of ethical issues to the world of AI.
We also need to understand the term ‘generative AI’, as it is used extensively by RICS in the new guidance. Generative AI is used to create new, computer-generated content, such as images, written text, videos and audio. Examples include ChatGPT and Google Gemini.
RICS has commented and provided guidance on AI in a variety of places.
New 2026 CPD framework
The first of these is within the new 2026 CPD framework, which applies from 1 January 2026 onwards. It applies to all members of RICS – AssocRICS, MRICS and FRICS. It DOES NOT apply to APC and AssocRICS candidates, who need to continue following the existing requirements in the candidate guides and record CPD using the online assessment platform.
The new framework defines three mandatory areas of CPD, the first two of which are relevant to the use of AI:
Ethics and professionalism
AI, data and technology
Sustainability.
This mandatory CPD must be structured learning and last for a minimum of one hour. These topics must be covered on a three-yearly rolling basis.
New AssocRICS and APC candidate guides
Moving on, RICS published new AssocRICS and APC candidate guides in 2025. In terms of AI and plagiarism, RICS states that candidates cannot use AI tools to write their submissions. This includes content-generating tools (i.e., generative AI) such as ChatGPT.
However, candidates can use proofreading tools to assist with grammar, spelling and word count reduction. Where a candidate has identified needs and requires a technological aid, they will need to submit a reasonable adjustment application to RICS for approval.
RICS will also continue to undertake plagiarism and AI detection checks as usual. An offence of this nature is extremely serious and the candidate will be subject to investigation and potential disciplinary action by RICS Regulation.
More recently, RICS has published a Practice Alert on the Use of RICS Designations, Status and Logos, and Integrity of Assessment Submissions.
This applies to:
RICS Members
RICS APC and AssocRICS candidates, counsellors and supervisors
RICS-Regulated Firms
Any individual or entity connected to RICS.
The Practice Alert was issued to clarify the use of AI around the RICS APC and AssocRICS assessments. This builds on the updated candidate guides mentioned above.
In the Practice Alert, RICS clarifies, once again, that the use of generative AI tools is not permitted. Any use of or reference to other sources must be clearly attributed to avoid plagiarism.
RICS uses Turnitin to identify AI use and plagiarism and will reject submissions and/or overturn assessment outcomes where issues are identified. Furthermore, RICS Regulation may become involved in serious cases of AI use or plagiarism. Investigation and regulatory action could, in the worst-case scenario, prevent a candidate from becoming a Member of RICS.
In a wider context, RICS has published a Professional Standard on the Responsible Use of AI in Surveying Practice, 1st Edition.
In support of the new guidance, RICS Acting President Elect, Maureen Ehrenberg, stated:
‘Artificial intelligence offers real promise to the surveying profession - but only if used responsibly and ethically. This standard ensures surveyors remain at the forefront of innovation while protecting clients, data, and public trust. It supports the profession’s adaptation to rapidly advancing technologies while reinforcing the core role of the surveyor - to provide trusted, independent, and ethical advice. This initiative reflects RICS’s broader mission to uphold the highest technical and ethical standards across the built and natural environment, ensuring innovation is aligned with the public interest’.
The guidance is structured into five key sections:
Introduction
Baseline knowledge for using AI in surveying
Practice management
Using AI
Development of AI.
The Professional Standard specifically relates to AI outputs that ‘have a material impact on the delivery of surveying services’. Where this is the case, the RICS Member or Regulated Firm must record in writing the decision to use AI and why this decision was made.
RICS starts by saying that they are supportive of the use of AI that drives the profession forward, provided that this use is balanced with the inherent risk of these systems. The use of AI, therefore, needs to be responsible and in the public interest.
Through the new Professional Standard, RICS aims to:
Upskill the surveying profession in the responsible use of AI
Minimise the risk of harm through the use of AI
Enable informed decisions to be made on the procurement of AI and the reliance on the outputs of AI systems
Explain the use and risks of AI to clients and other key stakeholders
Provide a framework for the use of AI by surveyors.
The baseline knowledge required for the use of AI in surveying is the focus of Section 2 of the Professional Standard. Where AI is deployed, surveyors must have sufficient, relevant knowledge to enable responsible use of the systems.
This includes:
Different types of AI system
How AI systems work
Limitations of AI systems
Failure modes (i.e., why an AI system might fail to perform its specific function)
Risks and why an AI system might produce an erroneous output
Inherent risk of bias
Data use and data risks.
RICS covers a variety of topics in relation to practice management and AI, including data governance, system governance and risk management.
Key ways to ensure proper practice management around the use of AI include:
Clear policies and procedures around procurement and responsible use
Risk identification in the early stages of AI adoption
Safeguarding of private and confidential data, e.g., encryption, back-ups, restricting access, staff training, anonymisation and consent
Written assessment to consider whether the use of AI is the most appropriate tool in the particular circumstances
Regularly reviewing any written assessments around the use of specific AI systems
Written risk register recording which AI systems are in use (see sections 3.2 and 3.3 for full details of what is required) and the related risks. This could be accompanied by a SWOT or PESTEL analysis.
RICS Regulated Firms must carry out comprehensive due diligence before AI systems are adopted for use. See Section 4.1 for a full list of what must be considered before an AI system can be formally adopted for use.
RICS also confirms that surveyors must use their professional judgement in relation to the reliability of outputs from AI systems. This includes using your knowledge, skills, experience and professional scepticism – a term that valuation surveyors will already be aware of from the latest edition of the Red Book.
The decision around the reliability of the AI system must be recorded in writing, with assumptions and any concerns explicitly stated. Section 4.2 provides a full list of what needs to be considered and documented.
RICS Regulated Firms and Members will need to adapt their terms of engagement if an AI system is adopted. This will include:
When AI will be used within the surveying process
The extent of Professional Indemnity Insurance cover for the use of the AI system
Relevant internal quality assurance processes around the use of AI
Redress mechanisms in the event of a client concern
Opting out.
Clients will also be able to request further information on the use of AI in the delivery of their contracted surveying services. Section 4.4 confirms what written information may be requested by a client, as a minimum.
Some surveyors and firms may decide to develop their own AI systems, rather than adopting those developed by third parties.
All provisions of the Professional Standard apply equally to the development of AI systems as they do to the adoption of AI systems. RICS outlines a number of additional provisions in Section 5 that need to be considered and recorded in writing, such as writing a sustainability impact assessment for the proposed AI system, recording stakeholder involvement, and introducing sufficient policies and processes.
AI and valuation
And, finally – how is AI being guided in the valuation profession? The current edition of the International Valuation Standards, at IVS 105, addresses the selection and use of valuation models, including AI, in the valuation process.
It states that ‘no model, for example an automated valuation model, can produce an IVS-compliant valuation without the valuer applying professional judgement’.
The current edition of the RICS Valuation – Global Standards (2025) at PS 1 requires that a valuation based on the output of artificial intelligence must be subject to application of professional judgement by a valuer to be regarded as a written valuation.
The Red Book does not prohibit the use of AI, but its use must be subject to professional judgement, appropriate terms of engagement, investigations, reporting and record-keeping, appropriately and proportionately considering:
Confidentiality
Intellectual property
Data and input verification
Appropriate assessment and professional judgement in relation to any process and/or model outputs
Transparency with the intended user(s) of the valuation
All other ethical, technical and legal matters referred to in the Red Book.
Conclusion
In conclusion, AI offers many possibilities and benefits for surveyors, but its adoption must be considered ethically and professionally. For the APC, the use of generative AI is a complete NO: candidates need to write and prepare for the APC solely using their own words, experience and knowledge.