Digitisation Is Changing Our Society At A Rapid Pace
Artificial intelligence will fundamentally change the way we live together and calls for new political rules of the game.
And while many people are only beginning to understand this change, a new factor is already coming into play, bringing with it another, much larger wave of change: Artificial Intelligence (AI).
It significantly accelerates digitisation and changes the way digital devices behave.
It’s not about Terminator-like scenarios, but about the actual use of artificial intelligence at central points of our lives.
To safeguard freedom, justice and solidarity in the 21st century and make technical progress an advantage for all, new political rules of the game are needed for our digital coexistence.
The current debate about Facebook and the data scandal surrounding Cambridge Analytica shows that we face global challenges which, five or ten years ago, only a few saw coming.
How free is our society in the digital world?
Is there more justice in the digital sphere than in real life, or less?
And is there any such thing as digital solidarity?
These questions become all the more pressing when the systems involved are no longer just conventional algorithms.
If AI systems exist in all areas of life, how can we ensure the freedom of individuals?
All major IT companies are engaged in intensive research in the field of artificial intelligence and are also using it in their products.
Thanks to AI, voice assistants like Alexa or Siri can better understand their owners' input and learn from it.
Facebook uses AI to analyse posts to determine if a user might be planning suicide.
In that case, for example, hints for counselling are displayed.
Google offers Magenta, an AI system that can compose music.
Whether or not true creativity can come out of it is probably a philosophical question.
These currently small-scale examples of AI will expand rapidly into all areas of life over the next five years, often entirely unnoticed, because it is difficult to judge from the outside whether a system contains artificial intelligence.
The collective term Artificial Intelligence refers to cognitive tasks that were previously performed only by humans or animals and can now be carried out by machines.
Science fiction films deal with so-called strong AI, in which machines act like humans and are virtually indistinguishable from them.
For the foreseeable future, however, only weak AI, which transfers individual human capabilities to machines, is realistic.
For example, a system can distinguish dogs from cats in images, or recognise and interpret human speech.
When a system learns something new from data, this is called Machine Learning (ML).
The part of ML currently attracting the most interest in the tech scene is deep learning. Here, a system learns structures independently and can also improve itself.
Thus, systems with artificial intelligence are typically useless at the beginning. However, if they are “trained” with a lot of data, they can later successfully perform a specific task and at the same time continue to improve.
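This train-then-improve idea can be sketched with a toy example, entirely illustrative and not taken from any real product: a single artificial neuron (a perceptron) starts with random weights, so its answers are initially useless, and with every training example it is nudged toward the correct output of the logical OR function.

```python
import random

# Training data for the logical OR function: inputs and expected outputs.
training_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# The untrained "model": randomly initialised weights and bias.
random.seed(42)
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)

def predict(x):
    # The neuron fires (outputs 1) if the weighted sum exceeds zero.
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if s > 0 else 0

# Training: nudge the weights toward the correct answer on every mistake.
for epoch in range(20):
    for x, target in training_data:
        error = target - predict(x)
        weights[0] += 0.1 * error * x[0]
        weights[1] += 0.1 * error * x[1]
        bias += 0.1 * error

print([predict(x) for x, _ in training_data])  # after training: [0, 1, 1, 1]
```

Before training, the outputs are essentially random; afterwards, the same code reproduces OR correctly. Real deep-learning systems apply the same principle with millions of weights instead of three.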
This is also the fundamental difference from classic software:
while classical algorithms are generally deterministic, AI systems are not.
The result of a computation no longer depends primarily on a hand-crafted algorithm; the data previously used for training is what matters.
This paradigm shift not only changes how digitisation works and what effects it has, but also increases the value of general and personal information.
Whether AI systems are fair or unfair depends to a large extent on their training data.
Data thus continues to gain relevance and decides the results of AI systems.
A general requirement to disclose the source code of software will therefore be only partially helpful in the future.
Even if the source code is entirely open, deep-learning systems still depend on the data they were trained with.
As it becomes more and more difficult to understand technical systems, new instruments are needed as well as specifications to maintain our core values even in a world of AI systems.
If AI systems are present in all areas of life, how can we ensure the freedom of the individual?
Here, transparency and labelling requirements are needed.
It must be clear when and where algorithmic decisions are used and also what data is used.
An Example Is The Medical Sector
If future computer diagnoses decide which therapy should be used in the event of illness, the patient must be informed.
It is also worth considering whether AI systems should only assist in critical decisions, with the final decision left to a human, and how that could be enforced.
Whether AI systems are fair or unfair depends to a large extent on the training data.
When facial recognition software is trained only on photos of people with fair skin, it has trouble detecting dark-skinned people.
It is indeed possible to create systems with a kind of built-in bias.
On the other hand, the same software, trained with well-balanced data, can make completely different decisions.
It is therefore necessary to ensure that such systems are trained with data that is as unbiased as possible.
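The effect can be made concrete with a deliberately oversimplified, synthetic sketch. Everything here is invented for illustration: a "face" is reduced to a single brightness number, and a nearest-neighbour lookup stands in for real facial recognition software. Identical code produces very different behaviour depending only on its training data.

```python
import random

random.seed(1)

def sample_faces(mean_brightness, n=100):
    # Synthetic "faces": one brightness value each, normally distributed.
    return [random.gauss(mean_brightness, 0.03) for _ in range(n)]

def detects(training_set, face, tolerance=0.1):
    # Toy detector: recognises a face only if it resembles something
    # it has already seen during training (nearest-neighbour lookup).
    return any(abs(face - known) < tolerance for known in training_set)

def hit_rate(training_set, test_set):
    return sum(detects(training_set, f) for f in test_set) / len(test_set)

light_train, light_test = sample_faces(0.8), sample_faces(0.8)
dark_train, dark_test = sample_faces(0.2), sample_faces(0.2)

# Skewed training set: light-skinned examples only.
# The detector works for one group and fails entirely for the other.
print(hit_rate(light_train, light_test), hit_rate(light_train, dark_test))

# The identical code, trained on balanced data, serves both groups.
balanced = light_train + dark_train
print(hit_rate(balanced, light_test), hit_rate(balanced, dark_test))
```

The software itself never changes between the two runs; only the training data does. This is why auditing training data, not just source code, matters for fairness.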
The verifiability and documentation of the learning units must be mandatory for state systems.
For private systems, policymakers must create legal obligations so that the principle of equal treatment remains guaranteed in the future.
Digitisation, and especially AI systems, presents the fundamental values of our society with new challenges.
The advantages of digitisation outweigh the disadvantages. Nevertheless, society and politics must deal faster and more clearly with these future issues, which are increasingly technical in nature.
Only with smart concepts and a positive approach can the fundamental values of freedom, justice and solidarity be safeguarded in the future, and the prosperity of all increased.