Innovation Fall 2025

ANNUAL CONFERENCE - CE OFFERINGS

Learn to identify bias, ethical issues in AI

Incorporating AI and automation into professional practice can help engineers and geoscientists work more efficiently. However, professionals need to recognize the potential biases and ethical implications of using these tools. To address this topic, the ethical learning session “Addressing Bias and Ethics in AI and Emerging Tech” is being offered at the Annual Conference. Presenters David Slade, P.Eng., Simon Diemert, P.Eng., and Matt Murdoch, P.Eng., will teach attendees how to identify and mitigate bias, ensure transparency in algorithmic decision-making, and uphold public trust in a rapidly evolving digital landscape. “AI is a relatively new technology and there is currently much uncertainty surrounding the potential risks of using AI-based systems and tools in professional work,” Slade said. “It’s critical that professionals across all industries and areas of practice understand the risks associated with the use of these tools and how to mitigate these risks to an acceptable level prior to employing them in professional practice.”

“Major corporations and professionals across multiple industries have mistakenly relied upon the output of AI systems without appropriately understanding and addressing the risks associated with their use.”

David Slade, P.Eng., Practice Advisor, Professional Practice




Identifying bias in AI systems

Bias can be introduced into AI and automation systems at multiple stages. For example, if the training data reflects historical inequalities or lacks diversity, the AI is likely to replicate those biases. As a result, an AI system trained to identify job candidates using existing employee profiles may inadvertently entrench industry biases by prioritizing or excluding candidates with gendered or racialized names.

Similarly, developers may unintentionally embed their own assumptions. For instance, an AI-powered geological hazard-prediction model might fail to account for local knowledge or Indigenous data sources that are not available in digital formats, creating unseen gaps that could put lives at risk. Likewise, a tool developed by and for a particular industry can prioritize solutions that benefit that industry, or a particular company, over other important considerations such as public safety.

Slade said registrants should ask questions to ensure the technologies they use operate with transparency, explainability, and interpretability.
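To make that kind of questioning concrete, the short Python sketch below audits the per-group selection rates of a hypothetical AI resume-screening tool against the “four-fifths rule,” a common heuristic in employment contexts for flagging skewed outcomes. The data, group labels, and function names are illustrative assumptions for this article, not material from the conference session.

# A minimal sketch, assuming hypothetical screening data: audit the
# per-group selection rates of an AI hiring tool's decisions.
from collections import defaultdict

def selection_rates(decisions):
    # decisions: list of (group_label, selected) pairs, where selected
    # is True if the tool advanced the candidate.
    totals = defaultdict(int)
    chosen = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            chosen[group] += 1
    return {group: chosen[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates):
    # Lowest group selection rate divided by the highest; values
    # below 0.8 fail the four-fifths rule heuristic.
    return min(rates.values()) / max(rates.values())

# Hypothetical decisions from the screening tool.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]
rates = selection_rates(decisions)
print(rates)                                   # per-group selection rates
print(round(disparate_impact_ratio(rates), 2)) # 0.5 here: flags a review

A ratio below 0.8 is a conventional trigger for closer review; it is a screening heuristic only, not proof that bias is present or absent.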



