Thoughts on digital topics

Book Club

Throughout its learning journey, the team has repeatedly come across interesting ideas and literature. Some of these are presented here.

Book reviews

Each review is written in the original language of the book.

Have you always wanted to go beyond the buzzwords (“Big Data”, “Machine Learning”, “Support Vector Machines”, “Neural Networks”, …) and actually understand the basics behind these concepts and approaches? Do you want to refresh your memory and get actionable insights?

Then “Data Science for Business” by Foster Provost & Tom Fawcett is for you!

Mind you: This is not an easy read. The book is targeted at 1) people who will be working with data scientists, data science projects or data science organizations; 2) developers of data science solutions; and 3) those who want to take a solid step on the career path towards becoming a data scientist.

Mathematics does feature, but not to the extent that it becomes a burden or hard to understand. The book takes its time (and space: ~350 pages) to really guide the reader through the underlying concepts, the related math, the typical challenges and suitable use cases (applications). The link to the related business applications is never lost and becomes tangible through a number of well-illustrated examples.

As with any book on the subject, it cannot cover everything. But if you are somebody who wants to understand data science at a deeper level than just buzzwords and wants a working knowledge of the approaches (even if you will not be implementing them yourself), I can highly recommend “Data Science for Business”. Ten, maybe twenty pages a day will keep you interested, involved and looking forward to the next day for more learning. But don’t try to rush through: This book will require (and captivate) your attention. It is worth the effort.

The book covers all the typical algorithmic classes and hence provides a solid overview, including input on where and why the algorithms might create value. It also explicitly states what it doesn’t cover (reinforcement learning and GANs being two notable omissions). Overall, the book certainly helps to mentally “sort” the landscape of machine learning approaches.

When I purchased this book, I was thinking along the lines of a “Machine Learning for Dummies” book, suitable for any educational background. That expectation was not fulfilled. “The Hundred-Page Machine Learning Book” requires a general (if not solid) understanding of matrix and vector calculations and mathematical notation, not to forget a general mathematical mindset, to be able to follow the often very condensed explanations of dozens of different algorithms and approaches.

Which got me thinking about the target group for “The Hundred-Page Machine Learning Book”. The author (Andriy Burkov) aims at both beginners and experienced practitioners looking for a quick refresher or for brainstorming before a project. The latter should work fine; the former maybe not so much. For those of you who have worked with statistics, regression and machine learning in the past and want to get a quick refresher, see what you might have missed so far and learn some new things along the way, this book fits the bill. If, however, you are relatively new to machine learning and data science, I would suggest picking up a somewhat lengthier book that provides additional explanation and a gentler learning path (for example “Data Science for Business”, which I will review at a later point in time).

The article by Laura Sanders delves into the current status and the implications of so-called “Brain-Computer Interfaces” (BCIs).

The technology has been developing fast, one example being Neuralink, a company that is inserting thousands of tiny electrodes into brains not just to read brain activity, but also to influence it. Both directions raise a plethora of ethical concerns – but they also promise a lot of advantages, e.g. letting people whose bodies no longer function well steer robotic support systems through brain activity alone.

Of course, the inside of our skull is still where we enjoy basically limitless privacy. Making the brain “readable” through BCIs coupled with artificial intelligence that detects patterns and, in the end, deciphers thoughts and activities can hence sound very scary indeed. And of course, influencing the brain to feel a certain urge or emotion could be used in many nefarious ways (although, realistically, this already happens today to a certain extent through drugs, pharmaceuticals, visual and olfactory nudges, etc.).

The technology is still developing, but this development progresses fast. And humanity has shown more than once that if something is technically possible, it will most likely be tested and deployed. Our “data privacy” discussions of today might seem like child’s play compared to our debates a few years down the road. Balancing the good and the potentially bad of a new technology will once more become a challenge. But this time the discussion will revolve around things awfully close to the core of our being.

The book “Monetizing Data” delves into how to adjust (or re-invent) business models based on making use of data, for example through AI.

The trend towards “digital” has resulted in a steep increase in available data, which in turn allows for continuous performance monitoring, holistic solutions and even addressing and optimizing ecosystems as a whole instead of just parts of the value chain. These new possibilities are enabled through data and data analysis, but B2B companies are often very much focused on the traditional business model of “assets for money”. How to align this approach with the notion of data and insights having a monetary value?

When looking at an ecosystem instead of a linear value chain, value constellations emerge that can no longer be addressed using the data of a single player alone. Symbiotic relationships and co-incubation with suppliers and customers are key, as “the odds are stacked strongly against a solo play”. The critical data is most likely dispersed throughout the ecosystem, along with a complex set-up of who owns the data and who is authorized to access and use it. Together with the notion of performance-based pricing (as a lofty goal), where the outcome at the customer (and not just the transfer of assets) is the basis for assigning value, it quickly becomes obvious that trust and cooperation between the multiple players in an ecosystem are of the highest relevance.

The book guides the reader through the eight steps of its roadmap, from the value constellation to scaling, continuously making clear that leveraging these new opportunities is critical to thrive (or even survive) in the business of tomorrow. Hence more than a few words of caution are given. For example, B2B companies often focus only on their peers – ignoring that indirect competitors are bursting onto the scene, often from completely different industry segments. These might well turn out to be the main competition a few years down the road. A similar shift in focus can hold true for the customers: The current key accounts might not be the best partners for moving into data-monetization business models. Furthermore, as in any constellation, there will be winners and losers. In many an ecosystem, the chemical industry is not accepted as the natural owner steering that specific ecosystem. Other roles will remain viable, but they might suffer from margin erosion. Finally, monetizing data will not be possible as a “bolt-on” approach but requires broader learning across the company.

Get ready for the business side of data with “Monetizing Data”. This will be a complex and lengthy journey, so we had better get started!

In the book, the concept of creativity is looked at mostly through the lenses of mathematics and music (the most mathematical of the arts). The author is a mathematician, and in his profession creativity is key to bringing structure and story to the mathematical world.

Recently, headlines have been made by sales of graphical artwork generated by Artificial Intelligence (AI). Other creative endeavors by AI have been in music and text. Consequently, “The Creativity Code” tries to explore whether “algorithms can compete meaningfully with the power of the human code” when it comes to creativity.

Creativity is hence not limited to just painting, writing, etc. Instead, creativity is defined as “the drive to come up with something that is new and surprising and that has value”, before distinguishing between exploratory creativity (starting from something existing and working from there), combinational creativity (combining two different art constructs) and transformational creativity (dropping some previously held constraints to move into new areas). In all three cases, the human brain is triggered by patterns: abstract structures that underpin the apparent chaos of the world around us. As a result, when creating art, humans use two competing brain systems: one that just creates (hypotheses, ideas, visuals, sounds, …) and a second that mediates, holds the first system in check and dials it back in to arrive at something that pleases, something that conveys logic, structure and meaning. This is in some ways similar to the workings of Generative Adversarial Networks (GANs), which also pit two systems against each other to arrive at a sweet spot. Hence, when employing competing AI systems for creating e.g. art, these are aptly termed Creative Adversarial Networks (CANs). The goal of these CANs is to arrive at an amount of “newness” (for example by inserting a bit of randomness into deterministic systems) that resonates with humans, while not becoming so new that humans lose interest or even reject the generated art.
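
To make the “creator versus mediator” mechanic concrete, here is a minimal GAN training loop in PyTorch – my own sketch, not taken from the book; the 1-D Gaussian “target” and the tiny networks are purely illustrative:

```python
import torch
import torch.nn as nn

# Creator: turns random noise into candidate "works".
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Mediator: judges whether a sample looks like the target distribution.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # the "pleasing" target: N(2, 0.5)
    fake = G(torch.randn(64, 8))            # the creator's current proposals

    # Mediator step: learn to separate real from generated samples.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Creator step: adjust so the mediator accepts the proposals as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print("mean of generated samples:", G(torch.randn(256, 8)).mean().item())
```

Roughly speaking, a CAN extends this setup with an extra loss that pushes the creator away from styles the mediator already knows, but the core creator-versus-mediator loop stays the same.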

The author concludes that machine creativity leading to human-level output might only be possible with machine consciousness and embodiment, to allow the machine to “experience” and derive meaning. In the end, AI is not yet able to match human creativity, but it certainly can be an enabler and supporter. And that role should not be underestimated. Humans were only able to get to the moon by making use of machines. At some point, taking the next step in any field might involve accepting (and where necessary inventing) the right help. The field of creativity is no exception.

Start off the new year by delving into human and machine creativity with “The Creativity Code”.

A fascinating read on how humans learn and how current machine learning algorithms are copying these approaches in silico.

As the title suggests, the main content of the book focuses not on artificial, but on biological intelligence. That being said: The last few years have shown that algorithms – and algorithm stacks – are copying the brain’s architecture and process of learning to an ever-increasing extent. For example, the four main pillars of learning (attention, active engagement, error feedback and consolidation) are increasingly being incorporated into machine learning.

The book starts off by showing that the human brain is not a blank slate at birth. Instead, babies are born with a broad set of assumptions about the world, incorporated in the structures of their brains. What does that mean? Well: Any learning machine, be it biological or artificial, needs to have the right architecture for learning a specific task. As a result, numbers and math are processed at a different location (and with a different architecture) in the human brain than visual stimuli. The brain has many different specialized areas with distinct differences in their architecture. Hence, “in computer-science lingo, one may say that genes set up the hyperparameters of the brain; the high-level variables that specify the number of layers, the types of neurons, the general shape of the interconnections, …” The tool (the brain) has already been fine-tuned to the various tasks before the child is even born. Plasticity subsequently allows learning on this pre-set architecture by changing the strength and number of connections (synapses) between the neurons. In the young brain, synapses are created and destroyed at a rate of several million per second! The job of the child’s environment is to influence which connections to keep and which ones to delete. This, in effect, is learning.
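
In machine-learning terms, that division of labor might look like the following minimal sketch (my own illustration, not from the book): the architecture choices play the role of the genes, while learning only ever touches the connection weights.

```python
import torch
import torch.nn as nn

# "Genes" fix the hyperparameters before any learning happens:
# number of layers, width, and the "type of neuron" (activation).
n_hidden_layers = 2
hidden_width = 32
activation = nn.ReLU

layers = [nn.Linear(10, hidden_width), activation()]
for _ in range(n_hidden_layers - 1):
    layers += [nn.Linear(hidden_width, hidden_width), activation()]
layers += [nn.Linear(hidden_width, 1)]
model = nn.Sequential(*layers)  # the pre-wired "brain"

# "Plasticity" then adjusts only the connection strengths (weights),
# never the architecture itself.
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(100, 10), torch.randn(100, 1)  # the "environment"
for _ in range(100):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()  # error feedback
    loss.backward()
    opt.step()
```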

Probably the most relevant insight for me personally came towards the end of the book: the debunking of the generally held belief that machines are data-hungry while humans are data-efficient. This is only partly true, depending on how you view the topic of “synthetic” or “simulated” data. Let me explain: Sleep is not just needed to strengthen existing knowledge, or to recode the learnings of the day in a more abstract and generalized form. Instead, every night, the ideas, thoughts and experiences from the preceding day are reactivated hundreds of times at an accelerated rate. This approach allows “gathering, synthesizing, compressing, and converting raw information into useful and exploitable knowledge.” So the brain uses its own generative models to synthesize data on which it then trains itself. Our brain is data-hungry after all!
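
A toy sketch of this “generative replay” idea in ML terms (my own illustration, not from the book; the Gaussian model and all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# The "day": only a handful of real observations of some quantity.
day_experience = rng.normal(5.0, 1.0, size=20)

# Fit a tiny generative model to the day's experience
# (here simply a Gaussian with estimated mean and spread).
mu, sigma = day_experience.mean(), day_experience.std()

# "Sleep": replay the day hundreds of times as synthetic samples.
replayed = rng.normal(mu, sigma, size=300)

# Consolidation: a slow learner trains on the replayed (synthetic) data,
# seeing far more samples than were ever actually experienced.
estimate = 0.0
for sample in replayed:
    estimate += 0.01 * (sample - estimate)
print(f"consolidated estimate: {estimate:.2f} (true mean was 5.0)")
```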

Pick up the book “How We Learn” to understand how your brain learns, how it makes (generalized) sense of the world, and how machine learning is copying human learning. Definitely worth your time.

While generally seen as slow, bureaucratic and inflexible, the German state apparatus is at the same time trusted by a large majority of citizens and perceived as highly competent. The book “Neustaat” tries to pry apart how to keep the good but weed out the shortcomings to prepare for the digitally enabled future – actually, not even the future, but the now.

Thomas Heilmann and Nadine Schön (together with another 60+ contributors) embark on a tour de force, covering a wide range of topics from pensions to money, education, technologies (data and Artificial Intelligence are, of course, broadly covered), climate and infrastructure. And the authors do so in an accessible and logical manner. Along the way they propose 103 measures that could drive the necessary transformation of the German state.

Core to their proposal is that Germany (and the underlying state apparatus) needs to transform into a “Learning State”, moving towards data-driven decisions based not on data frugality, but on data sovereignty and data prudence. Interesting concepts are proposed, like a data cockpit that automatically manages individual preferences on data collection and processing across services and platforms. A lot of effort also goes into clear rules on data protection and anonymization, and into the question of who should come up with these rules. Most importantly, understanding new(er) technologies and their implications would become a basic skill, already taught in school. Heilmann and Schön advocate a culture change that encompasses not just the state, but the whole country. To quote: “Innovation often comes from below; many eyes see more than two.”

So, for those of you who were thinking that a book coming from our state legislators would be dry and boring, I certainly have good news: “Neustaat” is a book that lays out a whole lot of challenges, but also inspires and provides hope for a great future.

“If work can be codified, it can be automated. And if it can be automated in an economic fashion, it will be.” However, as the authors explain, unchecked automation can lead to serious downsides not just for the replaced workers, but also for the companies and their products: With automation comes a more inflexible process; with automation you lose the understanding of the process; with automation you become less distinguishable from your competitors.

With the challenge defined, the authors lay out the solution for those of us who might worry about future employment. First of all, there are certain things that machines are not likely to do as well as we do (expert thinking, complex communication, ideation, social interaction, …). And there might be things that we value more highly if a human is involved (artisan soap, a piece of art, a well-written book). However, the main mantra of the book is that keeping humans in the loop is a win-win situation. Augmentation, not automation! In augmentation, humans and computers combine their strengths to arrive at a result that neither could have achieved alone.

Davenport and Kirby continue by defining the five types of roles that are here to stay: stepping up, stepping aside, stepping in, stepping narrowly and stepping forward. The reasons why these job types will continue to exist, as well as typical jobs and character traits of their incumbents, are discussed. There is hope after all! And finally, the authors make a prediction that points to a bright future: “Managers will increasingly understand that the key to their firms’ competitiveness is not the efficiency that automation provides but the distinctiveness that augmentation allows.”

Not always an easy read, the book nevertheless does a great service to those of us who want to be prepared to thrive in a more automated and augmented world. If you want to know which job role fits you best, go ahead and pick up a copy of “Only Humans Need Apply”.

Without delving into mathematics, Russell provides sufficient background on what distinguishes AI from regular computing, where we stand today and where the journey will most likely lead us. Both voices unconcerned about the developments and strong critics of AI are given due credit. While critical of the unchecked and practically rule-free development of today’s AIs, Russell recognizes the immense value that AI can bring. Hence, instead of proposing to stop developments that will lead to Artificial General Intelligence (AGI) or highly intelligent Artificial Narrow Intelligence (which honestly might not be possible to prevent anyway), he works through the various challenges arising at the shifting interface between human and machine – the most relevant of which is the continued human oversight of the newly arising (artificial) intelligent species.

One of the core ideas to address the problem of control is to no longer focus on clear objectives or explicit target functions for AI. Instead, argues Russell, the objectives should not be fully specified, so that the AI needs to maintain a feedback loop with humans and hence align its actions with human preferences and intentions. Moving away from the principles formulated by Isaac Asimov in 1942, Russell provides a clear framework for how humanity could bring about a future that allows us to benefit from the immense promise of AI while keeping the risks in check.
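
As a toy illustration of why uncertain objectives force a feedback loop (my own sketch, not Russell’s formal framework; the choice model and all numbers are invented): a machine that does not know which objective the human holds can query for preferences and update a belief, instead of blindly optimizing a fixed target.

```python
import numpy as np

rng = np.random.default_rng(0)

actions = ["action_a", "action_b", "action_c"]

# Rows: how much reward each *candidate* objective assigns to each action.
# The machine does not know which row the human actually cares about.
candidate_rewards = np.array([
    [1.0, 0.2, 0.5],
    [0.1, 0.9, 0.4],
    [0.3, 0.3, 1.0],
])

belief = np.ones(3) / 3  # uniform prior over the candidate objectives

def human_prefers_a(a, b, true_obj=1, beta=3.0):
    """Simulated human: noisily prefers the action with higher true reward
    (a simple logistic choice model)."""
    diff = candidate_rewards[true_obj, a] - candidate_rewards[true_obj, b]
    return rng.random() < 1.0 / (1.0 + np.exp(-beta * diff))

for _ in range(20):
    # Feedback loop: ask the human about a random pair of actions ...
    a, b = rng.choice(len(actions), size=2, replace=False)
    answer = human_prefers_a(a, b)
    # ... and update the belief over objectives via Bayes' rule.
    diffs = candidate_rewards[:, a] - candidate_rewards[:, b]
    likelihood = 1.0 / (1.0 + np.exp(-3.0 * diffs))
    belief *= likelihood if answer else (1.0 - likelihood)
    belief /= belief.sum()

# Act under the remaining uncertainty: maximize *expected* reward.
expected = belief @ candidate_rewards
print("posterior over objectives:", np.round(belief, 2))
print("chosen action:", actions[int(np.argmax(expected))])
```

After a handful of queries the belief typically concentrates on the human’s actual objective – the point being that the machine has to keep asking, because it never fully “knows” the objective.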

Despite having been in the field of AI (and the management of its risks) for decades, Stuart Russell has been able to craft a highly understandable and pleasurable read. Vivid examples drive home the core findings and make them memorable. For anyone relatively new to the field who wants to understand both the current capabilities of AI and the potential future capabilities and pitfalls of intelligent machines, this book comes highly recommended.

According to the author, David Weinberger, humans originally (in our distant past) did not understand the world and tried to make sense of it by establishing correlations that often passed for causalities. This strong oversimplification of the world led to managing ‘everything’ in models (chemical, physical, mechanical, electrical). However, these models are in themselves again simplifications. Everything affects everything, and trying to capture this in a single model is destined to fail.

While the title does not immediately suggest the link to cognitive solutions like machine learning and artificial intelligence, the book itself quickly drives home the point that these new technologies are disruptive to human development. Machine learning now allows learning on any and all available data, finding correlations and causalities with assigned probabilities. The problem with this newfound broad toolset is one of acceptance. Humanity has worked so hard to make sense of the world – and is not ready to accept that some of the resulting artificial intelligence works and operates in a way that is no longer cognitively accessible to us. Weinberger argues that this is okay. There is no causal need for us to understand tools like machine learning for all anticipated use cases. Of course, in some cases we do want explainability, specifically in areas that involve ‘fairness’. The interesting aspect here is that in many cases there is no single clear definition of ‘fairness’, so codifying it is often inherently impossible. Machine learning thus suddenly forces us to rethink our moral guidelines. And this will actually lead us to redefine our morals. One quote sums this up pretty nicely: “We are at the beginning of a new paradox: We can control more of our future than ever, but our means of doing so reveals the world as further beyond our understanding than we’ve let ourselves believe.”

Like many other books on cognitive solutions, this one covers both psychological and technical content. That being said, the book feels somewhat repetitive, making the same points through various stories. The repetitions don’t make the messages less clear or less true, but they do tend to wear the reader out a bit.

Now, how all of this affects human psychology and results in television series like ‘Game of Thrones’ would take up too much space in this review. If you want to know the answer, go ahead and read the book.

The Business Model Navigator is a research-based methodology that we have applied across various industry backgrounds. The book rapidly became a bestseller and was called a “sensation” by the Frankfurter Allgemeine Zeitung.

The book aims to better understand the key drivers of business model success and to foster business model innovation via a structured approach. The research program on business model innovation, led by Prof. Dr. Oliver Gassmann and Prof. Dr. Karolin Frankenberger, has been running since 2010; dozens of PhD and Master’s theses have contributed to it. The authors seek to enhance decision making in the context of business model innovation processes, facilitating the initiation, ideation, integration and implementation phases.

A strong business model is the bedrock of business success. But all too often we fail to adapt, clinging to outdated models that no longer deliver the results we need. The authors have looked at 350 business model innovators and found that about 90% of their innovations are recombinations of previously existing concepts, ideas or business models. This insight can be used proactively, as these patterns provide the blueprints you need to revolutionise your business and drive powerful change. Based on groundbreaking original research, the book shares these 55 recipes for success, providing practical templates to help readers build new business models from scratch and supercharge their existing ones.

This digital database, developed by Thomas Möllers and Camillo Visini, supports the book Business Model Navigator. It is based on research at the Institute of Technology Management at the University of St. Gallen and is provided by the BMI Lab.