To ensure innovations are designed for safe, ethical, and inclusive adoption, and that the risks are managed equally by the designer and the user.
To date, there is no universal, democratic regulatory framework for AI, no right to human intelligence, and no moral code for ethical innovation.
Why Do We Need It?
With every new innovation we must thoroughly evaluate the business and personal disruption it may cause. For example, stock market crashes and anomalies have been by-products of algorithmic trading, which now accounts for an estimated 60-70% of stock market trading.
One of the major stresses of everyday life today comes from technology. New products, apps, and innovations are pouring in, and we are drowning in them. Big brands are automating their warehouses without considering the people who work in them.
Privacy and personal information are being breached by companies using CCTV and digital tags to monitor how much work you are doing and how long you take for toilet breaks. Large supermarkets with regular clientele can predict your next purchase and profile your salary, marital status, kids' ages, and even your sex life.
When AI is abused by big corporations that mass-produce intelligent systems without consideration for the people whose jobs they affect, how do we control the emotional, social, and financial impact of such innovation? If any risk to users and stakeholders is identified, it should be shared equally between the innovator and the owner. Only then will we be able to ensure fair accountability and safeguard the user from harm.
Risk assessment tools and frameworks are not a nice-to-have but a must-have. Digital technologies have become part of our everyday lives; more accurately, part of us.