
Why Architectural and Engineering Firms Need AI Ground Rules

Chances are you’re already aware of AI’s explosive recent developments. News outlets are saturated with articles documenting groundbreaking AI advancements, and the AI industry is experiencing monumental growth.

Technology companies are continually releasing new AI tools, and businesses are increasingly diving into the AI waters. The architecture, engineering, and construction (AEC) industry is no exception. Organizations are using AI language models like ChatGPT to draft communications and proposals. Designers and architects are using generative AI to adaptively remodel floor plans and calculate thermal efficiency. The technology once prophesied to revolutionize how people work is here.

What lies ahead in AI for the AEC industry?

AI is projected to become more integral to AEC work. If your firm is looking to explore AI’s capabilities, consider creating guidelines around its use. While AI itself is neither good nor bad, individuals don’t always know how to use it responsibly. In the absence of an official AI policy, employees could inadvertently use AI tools to the firm’s detriment.

Here are some questions to explore when creating your AI policy. This list is likely to grow as AI becomes more popular, but starting with the following will be helpful:

Is AI information reliable?

AI language models have demonstrated a propensity for providing false information, so it’s sound practice to vet any work produced by AI tools. You may also want to dip your toes in before jumping in: start with simple tasks and gradually move to more complex ones.

If we use AI as part of our firm's services and reports, should we disclose this to our clients?

Some AI tools perform very basic services and calculations that are not of real consequence to clients. In these cases, it may not be necessary to disclose your use. If AI figures prominently in your work, you likely will want to disclose this to your client.

Can my firm be held liable if the AI platform we use samples protected intellectual property (IP)?

We have yet to see the full legal ramifications of using AI to rework other parties’ IP. Lawsuits currently being filed against AI content creators for misusing copyrighted work indicate that regulation is on its way. Training AI on protected IP may eventually be found to constitute copyright infringement. It’s wise to have your firm’s legal team assess the use of AI to create original work for the firm.

If sensitive data is fed to AI, can that result in a breach or lawsuit?

AI programs require input data to produce work. Many of these programs also record interactions to further train and refine their capabilities. Because of this, some organizations prohibit sharing sensitive data with third parties, such as generative AI programs. Feeding sensitive project details like government or military documents to an AI tool could result in costly litigation.


Staying flexible as AI develops

Creating guidelines to ensure employees use AI responsibly is a good start for firms looking to safely adopt this technology. AI regulations and best practices are likely to change, so you will need to revisit and update your company guidelines periodically to help mitigate risk.

Paying close attention to AI developments will be critical as AI use cases and regulations continue to evolve. Meeting with an insurance specialist to discuss your firm’s AI practices can help ensure that your firm has the coverage it needs.

AI tools have already proven to be exceptionally useful in the AEC space, and they’re certainly not in short supply. AI is here to stay, despite the controversy and challenges surrounding it. Firms that use these tools responsibly will see the greatest benefit from their services while facing the least potential risk.

Want to learn more?

Find Darren Black on LinkedIn.

Connect with the Risk Strategies Architects & Engineers team at aepro@risk-strategies.com.


About the author

For over 20 years, Darren S. Black has served architectural, engineering, and design-build firms as a professional liability insurance broker and risk management advisor. With previous experience as a litigator and coverage counsel, he brings a unique perspective to AI risks.