Insight
AI and judgment: What can be automated, and what must remain human?

Author: Severin Sjømark
In the coming years, we will likely see a growing stream of self-declarations from tech giants claiming that artificial general intelligence (AGI) has been reached. What is actually meant by this will vary. The most common definition is that AGI refers to artificial systems that perform as well as humans across all domains and can generalize across them. Even this definition is imprecise, and in the time ahead it will become increasingly fluid, with debates over whether a threshold has been crossed dominating the headlines.
For us at Deepinsight, the crucial questions are different: How is the technology integrated into society? Does it support human judgment, or does it begin to replace it? And who decides?
Health as the domain of judgment
Health is one of the areas where judgment matters most, as the difference between good and poor judgment can be the difference between life and death. Judgment determines priorities, treatment choices, and use of resources, and it is what makes it possible for these decisions to uphold human dignity.
Judgment is not just the application of rules, but a fundamental human capacity: the ability to assess a situation as a whole, weigh considerations that cannot be fully quantified, and take responsibility for the consequences of a decision. Judgment is shaped by experience and lived life, by context and circumstances that we will never be able to fully replace with data processes. AI is trained on enormous amounts of human experience in the form of representations, but it is not itself present in the situations (with bodies, emotions and human relationships) to which these representations correspond. It can analyze and optimize, but it cannot bear responsibility or relate normatively to what is right and good. Data can inform judgment, and models can support it, but judgment cannot be reduced to data and rules.
In a complex hospital environment, there are tasks that require precisely this capacity: prioritization under pressure, assessment of risk, handling uncertainty, interaction between people. This is where technology must be a support, not a substitute.
What should be automated, and what should not?
The distinction between tasks that can be automated and tasks that require judgment is easier to formulate as a principle than to implement in practice, because many tasks contain elements of both. At Deepinsight, we try to navigate this consciously. We automate complex routine tasks that do not require judgment: structuring information, optimizing schedules, removing manual bottlenecks. Where machines can do the work faster and more accurately without anything essential being lost, they should do so.
At the same time, we develop solutions that provide support and insight in tasks that require human judgment: AI can reveal patterns, point to risk, simulate scenarios. It can give decision-makers a better overview, but AI cannot take over responsibility.
This is in line with our AI strategy, which is built on responsibility and resonance. Resonance, as sociologist Hartmut Rosa describes it, is about people and their surroundings responding to each other meaningfully, about the relationship being alive rather than mechanical. In healthcare, this is the core of good care: that the patient is met as a human being in a situation, not as a data point in a system. Technology should help strengthen this relationship, not replace it. When we automate the routine, we free up time and attention, and the purpose of this is to make room for better judgment.
AGI, definitions and responsibility
It is likely that we will move into a phase where the boundaries between “advanced model” and “general intelligence” become increasingly unclear in public debate. But whatever terms are used, our responsibility remains the same.
Technology can become increasingly competent at bounded tasks: it can get better at predicting, optimizing and generating. The question is not only what it can do, but what it should do. If AI systems increasingly make decisions across society’s sectors, we must be clear that the normative, the assessment of what is good, right and desirable, cannot be outsourced. Our tools may become more sophisticated, but they must always serve human judgment.
Math for good
Our slogan is «math for good». The good is not something a model can define on its own; it is something we as humans must assess, discuss and take responsibility for. Mathematics, algorithms and models give us powerful frameworks, and they can help us see more clearly and act more precisely. The assessment of what is actually right and good in a concrete situation remains, however, our task as humans.
Therefore, our approach to AI is simple in principle, even if it is demanding in practice: we automate what does not require judgment, we support what requires judgment, and we protect the space where judgment must remain human. This also entails a responsibility to think about how the tools we design shape the conditions for learning among those who use them: not just whether the tools work efficiently today, but whether they help professionals continue to develop the capacity that makes their judgment valuable.
At a time when technological capacity is growing rapidly, we believe that true innovation in healthcare is about strengthening humans’ ability to make good decisions, not about removing the human from the decision loop. This is how we understand «math for good», and it is how we believe AI should be integrated into healthcare.