Predicting earthquakes with AI: Japanese researchers want to use AI to predict earthquakes and how they spread. Because Japan experiences quakes almost every day, the researchers hope to improve civil protection. The goal is to predict earthquakes shortly before they strike.
AI as a key to national security: Eric Schmidt, the former top executive of Google, has described artificial intelligence as a linchpin in global power struggles and warned that the United States could fall behind rivals like China unless it elevates AI as a cornerstone of national security. Schmidt has urged the public and private sectors to bolster the country’s AI capabilities.
GPT-3: an AI game-changer or an environmental disaster? GPT stands for the “generative pre-training” of a language model that acquires knowledge of the world by reading enormous quantities of written text. But what are the environmental costs of machine-learning technology? At the moment the only consensus seems to be that it is a very energy-intensive activity, but exactly how large its environmental footprint is remains a mystery. This may be partly because it is genuinely difficult to measure, but it may also be partly because the tech industry has no incentive to inquire too deeply into it, given that it has bet the ranch on the technology.
AI problems and challenges rooted in colonialism: Shakir Mohamed, a South African AI researcher at DeepMind, has been reflecting on what colonial legacies might exist in his research. In 2018, Mohamed penned a blog post with his initial thoughts. In it he called on researchers to decolonize artificial intelligence – to reorient the field’s work away from Western hubs like Silicon Valley and engage new voices, cultures, and ideas for guiding the technology’s development. The ties between algorithmic discrimination and colonial racism are perhaps the most obvious: algorithms built to automate procedures and trained on data within a racially unjust society end up replicating those racist outcomes in their results. There is also the phenomenon of ghost work, the invisible data labor required to support AI innovation, which extends the historical economic relationship between colonizer and colonized. Many former US and UK colonies—the Philippines, Kenya, and India—have become ghost-working hubs for US and UK companies. The countries’ cheap, English-speaking labor forces, which make them a natural fit for data work, exist because of their colonial histories.
AI requires international standardization: Standardization can determine what a given technology can and should do – and what it should not. Since innovation drives technical development, uniform standards are particularly necessary to be able to scrutinize the technology, said Christoph Winterhalter, head of the German Institute for Standardization (DIN).
Algorithm finds hidden connections between paintings at the Met: A group of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Microsoft created an algorithm to discover hidden connections between paintings at the Metropolitan Museum of Art (the Met) and Amsterdam’s Rijksmuseum. The new “MosAIc” system finds paired or “analogous” works from different cultures, artists, and media by using deep networks to understand how “close” two images are. To find analogous images between different cultures, the team used a new image-search data structure called a “conditional KNN tree” that groups similar images together in a tree-like structure. To find a close match, they start at the tree’s “trunk” and follow the most promising “branch” until they are sure they’ve found the closest image. The data structure improves on its predecessors by allowing the tree to quickly “prune” itself to a particular culture, artist, or collection, quickly yielding answers to new types of queries.
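The “conditional” part of the search – pruning the candidate set to one culture, artist, or collection before finding the nearest neighbor – can be sketched in plain Python. This is a much-simplified, brute-force illustration, not the CSAIL team’s actual data structure: the titles, cultures, and 2-D vectors below are invented stand-ins for the deep-network image embeddings MosAIc really uses.

```python
import math

# Toy "image embeddings": collection metadata plus a feature vector.
# The real system uses high-dimensional deep-network features.
artworks = [
    {"title": "Blue Vase",      "culture": "Dutch",   "vec": (0.1, 0.9)},
    {"title": "Porcelain Jar",  "culture": "Chinese", "vec": (0.2, 0.8)},
    {"title": "Silk Scroll",    "culture": "Chinese", "vec": (0.9, 0.1)},
    {"title": "Windmill Study", "culture": "Dutch",   "vec": (0.8, 0.2)},
]

def nearest(query_vec, culture=None):
    """Conditional nearest-neighbor search: optionally 'prune' the
    candidate set to one culture, then return the closest artwork
    by Euclidean distance."""
    candidates = [a for a in artworks
                  if culture is None or a["culture"] == culture]
    return min(candidates,
               key=lambda a: math.dist(a["vec"], query_vec))

# Unconditional query: closest match anywhere in the collection.
print(nearest((0.12, 0.88))["title"])                      # Blue Vase
# Conditional query: the closest *Chinese* analogue of the same query.
print(nearest((0.12, 0.88), culture="Chinese")["title"])   # Porcelain Jar
```

The conditional KNN tree described in the article achieves the same effect without scanning every candidate, by organizing similar images into branches that can be pruned wholesale.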
Digitization in SMEs: Industry 4.0 is funded with another 2 million euros industry-of-things.de
AI on the stock exchange: successfully invest etf-nachrichten.de
Computers will soon be smarter than humans analyticsindiamag.com
Public cloud environments full of security holes, according to a study it-daily.net
Putting data to even better use bigdata-insider.de
NUMBER OF THE WEEK
248 million personal online credentials have been leaked.
Digitization of medicine: The digitization plans of Germany’s health ministry focus primarily on AI development. A digital council was convened for this purpose. However, Stephanie Kaiser, founder of Heartbeat Labs, a company that helps medical startups bring artificial intelligence applications to market, explained that AI can do much more than allow patient data to be managed more efficiently. She said in the future, AI should be used in the body wherever drugs and therapies are applied.
Pushing the limits? US researchers have claimed that photos of faces can help AI determine whether someone will become a criminal – with an alleged hit rate of 80 percent. The main problem with the study is that the evaluated data came from police authorities. Hamid Arabnia, a professor of computer science at the University of Georgia, halted the study’s publication, pointing out that crime data is often subjective. In the US, Black people are stopped and checked by police more often and are therefore overrepresented in crime statistics. A system that learns to classify Black people as criminal is particularly dangerous.
PROJECT OF THE WEEK
AI for maintenance work: The Hamburger Hafen und Logistik AG (HHLA) uses AI to create reliable forecasts for the service life and the maintenance work on harbor crane ropes. To test new application options, AI systems with different focuses and characteristics are used in several projects. Predictive maintenance is particularly interesting economically because the lifespans of ropes vary.
“Artificial intelligence will overtake us humans in five years.”
Tesla CEO Elon Musk has said that artificial intelligence will be vastly smarter than humans and could overtake the human race by 2025.
AI discriminates: It has been known for years that artificial intelligence puts people who are not white at a disadvantage. But AI is only as fair as the people who program it – the problem lies not in the software itself, but in the development teams.