Responsible, Safe, Ethical, Human-Centered

AI Research & Development

Dr. Scott Allen Cambo has dedicated his career to helping teams research, design, build, and monitor AI systems that are safe to use, responsibly developed, and trustworthy.

News & Media Appearances & Mentions

Expertise

Responsible and Safe AI

Learn more about my approach to helping teams develop responsible, safe, and ethical AI systems.

Natural Language Processing

While I have dabbled in many forms of AI, NLP systems are where I feel most at home.

Algorithm Auditing and Risk Assessment

Primarily with regard to NYC Local Law 144, which requires bias audits of automated employment decision tools.

Data & AI Governance

Being a good steward of sensitive data and the AI that is trained from it is a big responsibility. Check out these resources to learn more.

Quantitative Measurement + Qualitative Understanding

I have studied with some of the world's top experts in mixed methods research. Learn more about why mixed methods research is critical to understanding and developing good AI systems.

Designing Trustworthy AI Systems

With years of experience in Human-Computer Interaction research and AI development, I have developed a process for building AI systems that are not simply trusted by users, but rather AI that all stakeholders can agree is trustworthy.

Prototype, POC, and MVP development

Early stages of AI development are often hindered by an inability to distinguish between prototypes, proof-of-concepts, and minimum viable products. By understanding which questions each can answer and how, companies can save the critical time and money needed to see a project through.

Professional Leadership & Community Development

AI 2030 Senior Fellow

A Responsible AI Fellowship led by Xiaochen Zhang: "A diverse group of leading minds committed to shaping AI's future responsibly."

Responsible AI Licensing Initiative (RAIL)

Steering Committee Member & Co-Lead of the Tooling and Procedural Governance Working Group

Tech Better

Helping create a Responsible AI Maturity Model to aid organizations in assessing the quality of their responsible AI initiatives and programs.

ARVA

Member of the AI Risk & Vulnerability Alliance (ARVA) working group developing a taxonomy for the AI Vulnerability Database.

NIST GAI-PWD (Generative AI Public Working Group)

Cataloging incidents where AI has been observed to cause harm is a critical step toward mitigating that harm.

FAccT '24 Program Chair

Aiding the development of the technical program for the world's premier conference on Fairness, Accountability, and Transparency in AI systems.

Academic Reviewing

I review for several ACM conferences (FAccT, CHI, CSCW, UIST, DIS, UbiComp) and SAGE academic journals such as Social Science Computer Review.