Ethics

ETHICS EQUALS TRUST.
Trust equals economic value creation.

The Hypergiant Ethical Reasoning Model is based on trust. We believe that developing and deploying ethical reasoning at every stage of the process will earn the trust of our team, our clients, and society at large in the products and services we create.

HOW DO WE BUILD TRUST?

Hypergiant - Office of Machine Intelligence

ETHICAL REASONING PROCESS

Hypergiant's Ethical Reasoning Process is designed to ensure a rigorous ethical review of each proposed AI use case. The three core steps are:

1) Establishing Goodwill (the use case has a positive intent).

2) Applying the Categorical Imperative Test (the use case can be applied broadly without a negative ethical impact).

3) Conforming to the Law of Humanity (the use case has a direct benefit for people and society).

HYPERGIANT'S DECISION MAKING FRAMEWORK

STEP 01
GOODWILL
Does this choice reflect positive intent and align with our company values?
STEP 02
CATEGORICAL IMPERATIVE
Could humanity survive another 100 years if everyone adopted the business model and habits of this use case?
STEP 03
LAW OF HUMANITY
What impact will we have on society if we deploy AI in this way? Are we using people simply as a means to an end?
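The three gates above can be sketched as a simple ordered checklist. This is a minimal illustration only; the class and field names (`UseCase`, `ethical_review`, and the boolean flags) are hypothetical and are not part of any Hypergiant product or API.

```python
# Illustrative sketch of the three-step review as a gated checklist.
# All names here are hypothetical, not Hypergiant's actual tooling.
from dataclasses import dataclass


@dataclass
class UseCase:
    name: str
    has_positive_intent: bool        # Step 01: Goodwill
    passes_universalization: bool    # Step 02: Categorical Imperative
    benefits_people_directly: bool   # Step 03: Law of Humanity


def ethical_review(use_case: UseCase) -> tuple[bool, list[str]]:
    """Run the three gates in order and collect every failed gate,
    so reviewers see all concerns at once rather than stopping early."""
    failures: list[str] = []
    if not use_case.has_positive_intent:
        failures.append("Goodwill: intent is not positive")
    if not use_case.passes_universalization:
        failures.append("Categorical Imperative: does not universalize")
    if not use_case.benefits_people_directly:
        failures.append("Law of Humanity: treats people as mere means")
    return (not failures, failures)


# Example: a use case that fails only the Law of Humanity gate.
approved, reasons = ethical_review(
    UseCase("demo", has_positive_intent=True,
            passes_universalization=True, benefits_people_directly=False))
```

Collecting every failure (rather than rejecting at the first gate) mirrors the intent of a review process: the proposer learns all the grounds on which the use case falls short.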
"THE ACTUAL POSSESSION AND EXERCISE OF GOOD CHARACTER IS NECESSARY TRULY TO UNDERSTAND MORAL PRINCIPLES AND PROFITABLY TO APPLY THEM"
- ARISTOTLE / THE NICOMACHEAN ETHICS

AI CODE OF ETHICS

HYPERGIANT DECLASSIFIED CODIFICATION DOCUMENT #295E


AI ETHICS ARE HUMAN ETHICS

Since most AI transformations are high-stakes corporate initiatives, it is imperative that our clients and partners be prepared to incorporate ethical reasoning at the outset and throughout the entire process. Hypergiant's AI Code of Ethics is steeped in industry best practices and guided by the following principles:

HYPERGIANT CHIEF ETHICS OFFICER


WILL GRIFFIN

CHIEF ETHICS OFFICER, HYPERGIANT

Will Griffin is the recipient of the IEEE 2021 Award for Distinguished Ethical Practices. He is the creator of Hypergiant's Top of Mind Ethics (TOME) framework, which won the Communitas Award for Excellence in AI Ethics. His past entrepreneurial work has also earned him the prestigious IAB/Brandweek Silver Medal for Innovation and the culturally significant NAACP Image Award. He is currently a guest lecturer on AI ethics in the University of Texas Department of Computer Science and at Penn State Law School, and was recently a TEDx speaker on ethics.

Griffin is an entrepreneur and proven thought leader with a track record of bringing culture-shifting content and technology to new platforms. He is a member of the IEEE P7000 Model Process for Addressing Ethical Concerns During System Design working group. His previous ventures include launching the first original content slate on the Sony PlayStation Network with Endemol North America, and launching the first original VOD channel, Hip Hop OnDemand, on Comcast Cable Systems. Griffin is a long-time board member of the nation's oldest multicultural agency, UniWorld (WPP), and was an in-house strategist at News Corporation (Fox Entertainment Group).

He was an Associate at McKinsey & Co. in the Entertainment and Media practice (Los Angeles), where he worked on operations and post-merger integration for the world's largest media companies, including Time Warner and Disney. He began his professional career in the Asset-Backed Securities Group at Goldman Sachs in New York.

AWARDS AND PRESS

Industry Awards for Excellence in AI Ethics


MEDIA: VIDEO AND PODCASTS

AI Ethics-Related Media and Podcasts


HYPERGIANT ETHICS IS DRIVING THE FUTURE

THROUGH MAJOR ETHICS IN AI INDUSTRY COLLABORATIONS

  • Finalist, Tech Pioneers Forum (winners announced in 2021; Google, Palantir, Spotify, and Twitter are alums. Confirmed with Kay Firth-Butterfield, Jan. 2021).
  • Invited to the Global Innovators Community: https://www.weforum.org/communities/global-innovators.
  • Trained on Microsoft Responsible AI.
  • Advised the Government of India on its National AI Ethics Strategy.
  • Invited to most WEF Events around the world.
  • HG AI Ethics Framework featured in 10 Gartner AI reports as a preferred vendor for Digital Ethics.
  • Invited to present to Ethics Analysts and Higher Education Analysts.
  • Working group member for the IEEE P7000 Standards for Autonomous Systems: developing standards for engineers worldwide through bi-weekly meetings; final standards released by May 2021.
  • Developing a worldwide, independent certification process for rating AI companies, products, and services.
  • Invited to the Spring 2021 UT AI Procurement Summit, March 24-26, 2021.

ETHICS IN AI PRODUCT:

HYPERGIANT ETHICAL DECISION-MAKING WORKFLOW APP: TOME (TOP-OF-MIND ETHICS)


ETHICS IN AI AS A SERVICE

WEEK 01: DEVELOPING AN ETHICS IN AI STRATEGY.
WEEK 02: OPERATIONALIZING ETHICS IN AI INTO WORKFLOWS.
WEEK 03: ETHICAL DECISION-MAKING WORKFLOW TOOL TRAINING AND DEPLOYMENT.
ILLUSTRATION OF TOME ETHICS IN AI IMPACT ON COMMON AI ENGAGEMENTS

| High Level | Specific | Task | Input/Output | TOME Vetting + Risk/Ontology | Security - Ethics Lens | Ethics Certification |
|---|---|---|---|---|---|---|
| Data science | Data collection | Convert real world into data | Surveys, sensors, studies, 3rd parties | Bias, implied consent (agency) | Secure data from adversarial attacks; anti-social uses (e.g., redlining, disparate impact in policing) | Get data collection process rated |
| Data science | Data munging | Normalization and clean-up | Raw data > statistically useful data | Privacy | Equity; prevent biased exclusions (e.g., credit ratings based on ZIP code); avoid public disclosure (like the Experian hack) | Certify clean-up process |
| Data science | Data mining | Extract findings | Analysis, segmentation, personas | Inaccuracy / privacy / discrimination | Transparency on conclusions while protecting personal data; secure vs. intended and unintended consequences | Certify analysis process |
| Data science | Data visualization | Inform insights | Clustering, trends | Bias | Transparency on conclusions while protecting personal data | Certify analysis process |
| Data science | Dashboards & BI | Real time - inform decisions | Actionable trends & alerts | Bias | Protect vs. adversarial attacks at point of decision-making | Certify decision-making process |
| AI / Data science | Natural language processing (NLP) | Interpret text | Text, voice | Inaccuracy; bias (accents) | Secure voiceprints (especially protected categories); protect vs. deep fakes | Certify NLP interpretation process |
| AI / Data science | Machine learning | Determine correct action | Well-specified problem, big data | Privacy, bias, transparency | Adversarial attacks | Certify ML process |
| AI / Data science | Deep learning | Prediction models | Big data > output verification | Bias; assess impact of models on sensitive populations | Adversarial attacks, nefarious uses of models | Certify DL prediction models |
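One way to make a stage-by-stage mapping like this actionable is to encode it as a lookup that produces a per-stage vetting checklist. The sketch below is purely illustrative: the `STAGES` dictionary (showing two of the stages) and `vetting_checklist` function are hypothetical names, not part of the TOME app.

```python
# Hypothetical encoding of the TOME impact mapping: each engagement
# stage maps to its ethics risks, security lens, and certification step.
# Names and structure are illustrative, not any actual Hypergiant API.
STAGES = {
    "data collection": {
        "risks": ["bias", "implied consent"],
        "security": "secure data from adversarial attacks and anti-social uses",
        "certification": "get data collection process rated",
    },
    "machine learning": {
        "risks": ["privacy", "bias", "transparency"],
        "security": "defend against adversarial attacks",
        "certification": "certify ML process",
    },
}


def vetting_checklist(stage: str) -> list[str]:
    """Return the ethics checks to run before deploying a given stage."""
    entry = STAGES[stage.lower()]
    return (
        [f"vet risk: {risk}" for risk in entry["risks"]]
        + [f"secure: {entry['security']}",
           f"certify: {entry['certification']}"]
    )


# Example: checklist for the data collection stage of an engagement.
checks = vetting_checklist("Data Collection")
```

Keeping the mapping as data (rather than hard-coded logic) lets an ethics team extend or revise the stage list without touching the workflow code.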