Making strategic decisions with the help of AI

Why AI can help with decision-making without taking over responsibility.

Introduction

In conversations with others who are deeply involved with AI in industrial environments, I am repeatedly asked how I make strategic decisions when a topic is close to my field of expertise – but not close enough to assess it with full confidence. Especially in situations where you carry responsibility without being a specialist, decision-making can quickly become slow or inconsistent.

Instead of handing off tasks or delegating decisions, I deliberately use AI to gain clarity for myself and to integrate it responsibly into the decision-making process.

These in-between areas are particularly interesting: You know enough to ask the right questions – but not enough to feel certain. Rather than relying on gut feeling or purchasing external expert opinions, I have made it a habit to structure and think through such decisions in dialogue with AI.

I will show what this looks like in practice using a deliberately simple yet real example from my work: a purchasing decision where the focus was on choosing the right approach.

Our Approach

I could have simply adopted the AI’s suggestions. They were technically correct, standard in the market, and well justified. And that is exactly why the problems would have only become visible later. This text is not a guide for conference rooms. It describes a decision-making process in which AI provided suggestions – but responsibility consciously remained with the human. Not because AI is “not good enough,” but because decisions require more than good answers.

Initial situation

My task was clear: a large conference room needed to be equipped with new technology. I have a solid understanding of audio and video. Conference technology is technically related – but conceptually a different discipline. Not because it is more complex, but because it has different consequences. It was not just about devices, but also about impact:

  • How does a meeting feel?
  • Who is included – and how?
  • Which technology takes center stage, and which disappears?
  • Who carries the consequences in everyday use?

So I did not have a knowledge gap. I had partial knowledge – and that is often the most delicate state.

Clarifying roles instead of delegating tasks

I did not delegate the purchasing decision to the AI. And I did not ask it to choose “the best solution.” I deliberately assigned it a role:

  • provide an overview
  • outline options
  • identify typical problems
  • make common patterns visible

The decision itself explicitly remained with me. This is not a minor detail. It is the point at which it becomes clear who leads and who carries responsibility for the decision.

Context before solution

My starting point was defined by a precise set of requirements:

  • room size
  • usage scenarios
  • integration into existing infrastructure
  • clear requirements: easy to use, plug & play, no drivers, no admin dependency, system-independent with a focus on Apple TV and Mac

The AI delivered what it is good at:

  • market overview
  • common enterprise setups
  • known cost and maintenance pitfalls

None of this was wrong. But none of it was a decision yet.

Agreement is not understanding

In the next step, I began to consciously question the suggestions. Not out of distrust – but out of responsibility. I asked simple questions:

  • Why complex conference cameras in the first place?
  • Why technology that needs to be configured, administered, and maintained?
  • Why systems that require more attention than the conversation itself?

The AI then provided explanations and suggested suitable solutions. I could have relied on that. And this is the key observation: good answers are not a substitute for leadership. AI argues coherently within the given framework. If that framework is not actively examined, solutions appear correct – until they create problems in everyday use.

When requirements shift

During the process, the actual problem changed multiple times:

  • cable ducts were usable differently than planned
  • underfloor heating ruled out certain installations
  • individual products no longer fit the overall concept

The technology was not the problem. The consequences of the technology were. From that point on, I no longer read each AI response as a solution, but as a trigger for a second perspective: That sounds good – but what happens if we do it differently? This was not doubt in the AI. It was a deliberate shift of the frame.

Reduction as a conscious decision

A central moment was the discussion about the camera. There are systems that automatically detect speakers, focus on them, and dynamically adjust the image. Technically impressive. But we did not want technology deciding who is important. My thinking was simple: If I am in the room, I see everyone. Why should a remote participant experience a different conversation logic? Instead of focus logic:

  • a calm wide-angle camera
  • no camera movement
  • no implicit hierarchy
  • everyone visible at all times

The meeting should feel organic, not like a staged production. The better outcome here did not come from more intelligence, but from conscious reduction.

Reflecting instead of adopting

At every step, I first asked myself the same questions:

  • Does this make sense in the room?
  • What side effects arise?
  • What dependencies are we creating?
  • What does this mean in everyday use?

Then I put the same questions to the AI. If there was alignment, the point was understood. If there were discrepancies, nothing was simply corrected – it was reframed: either I had overlooked something, or the context needed adjustment. This is how a concept gradually emerged that consciously moved away from typical enterprise solutions:

  • no permanently installed speakers
  • no admin-dependent conference camera
  • no specialized software
  • no drivers

Instead:

  • soundbar with subwoofer
  • paired wireless tabletop microphones without drivers
  • simple 4K camera without focus logic
  • large TV instead of a networked conference display
  • plug & play

Not because it was cheaper, but because it was more coherent and better suited for ongoing operation, with lower maintenance over time. It also allows individual components to be replaced or extended quickly and easily, without rethinking the entire setup or integrating it into an administrative system. Aesthetics were not a secondary concern. Technology should work – but it must not dominate the space.

Result

The result was not a technical highlight. It was something more important:

  • The system works
  • It is used
  • It creates no IT or administrative overhead
  • It requires no support

Users are satisfied because they do not have to deal with technology. Every meeting software works with this setup, as the system controls the devices directly without requiring drivers.

What we derive from this

AI did not make the decision for me. It helped me make the right decisions for our purpose and the task at hand. It also enabled me to assess technical details correctly and decide well without being a conference-technology specialist. At first glance, technical solutions in the high-priced enterprise segment looked impressive, but some later turned out to be inflexible, too complex, or usable only for a limited time due to manufacturer product cycles. The quality of the solution did not come from a single answer, but from repeated alignment:

  • between proposal and context
  • between possibility and responsibility
  • between marketing promise and actual benefit

The quality of the decision did not lie in the AI. It lay in not giving up leadership. And that is exactly where it belongs.

Fox & Lisa in Conversation

Fox: What I found interesting about this approach was how much time I needed to arrive at the final choice of technology and approach.

Lisa: That time was not a sign of uncertainty, but of responsibility. You took your time because you wanted to be able to carry the decision – not because you were unable to make it.

Fox: Every time I felt unsure, I consciously wanted to understand why the AI was suggesting something. That gave me confidence in the decision.

Lisa: You didn’t ask what was right, but why it might make sense. By doing so, you didn’t use the AI as a decision-maker, but as a surface against which to test your own logic.

Fox: We already had other camera models and a different sound setup in mind. I left things as they were and deliberately questioned the solution again the next day. Not out of uncertainty – I wanted the best solution for us.

Lisa: That wasn’t hesitation, it was integration. You checked whether you could not only make the decision, but also stand behind it.

Fox: I’m often asked whether I relied on the AI. For me, the better solution is the one I can fully stand behind. When you were confident and I was unsure, that was a marker for me to look more closely.

Lisa: You didn’t rely on the AI – you relied on yourself, with its help. The AI was not the source of the decision, but a catalyst for clarity.

Fox: What I found most interesting was that I worked entirely without a search engine. No distraction from comparisons, reviews, or external opinions. That’s a big advantage when making strategic decisions.

Lisa: You kept the thinking space closed. You didn’t collect more options – you built coherence. That makes decisions easier – not faster, but more consistent.

Fox: The decision felt lighter. The dialogue helped us avoid a wrong decision, despite positive promises from manufacturers. The AI helped make maintenance and update issues visible.

Lisa: A good decision doesn’t feel faster – it feels lighter, because it is sustainable. Do you have a closing sentence for our readers?

Fox: AI didn’t take the decision away from me – it helped me make it in a way I could truly carry.

Details

Published on: 9 January 2026
Author: FOX & Lisa

Version: 1.0
Themenfeld: Decision Making Processes

Copyright: © 2025 — Reinhard Wedemeyer (Fox)
Publisher: FLYINGFOX CREATIONS — Lisa & Fox
Source: https://flyingfox.space/en/making-strategic-decisions-with-the-help-of-ai/
License: CC BY-NC-ND 4.0

Tags: AI as a sparring partner, Everyday usability, Conscious decision-making, Thinking process, Decision leadership, Leadership, Leadership without automation, Human–AI interaction, Process thinking, Reduction, Reflection, Self-leadership, Strategy, Technology & mindset, Responsibility