Why MacroInsight

Why Choose Us

We are not merely users of AI tools,
but a team of data scientists who start from the essence of the problem.

Our DNA

We see the essence of each problem
and solve its real pain points.

We are a team of data scientists.
We don't just use AI tools — we do the research directly,
from data collection and model design through training and deployment.
Every step is our own technology.

The common industry approach is to compress and package large models.
We build our understanding of each problem's structure from scratch,
designing specialized engines that are faster and more accurate than general-purpose models.

We don't just do AI detection —
we solve the "why" and the "what's next."

Research-Driven

Original R&D, not package assembly

The world's first Chinese-language radiology RECIST inference engine. Trained on data from 971 patients and 5,296 real reports, not a fine-tune of someone else's model.

LLM Agent Architecture

Medical-grade LLM Agent

Why not just use GPT-4? Because patient data cannot go to the cloud. We built a local inference engine on Apple Silicon edge devices — 14 seconds for structured report analysis, accuracy >93%.

System-Level Thinking

Not just detection — building systems

Our system performs 45-second AI triage at disaster sites, sets up a smart command post in 5 minutes, and integrates seamlessly with hospital systems — zero rework. Field, edge AI, and hospital domains unified. This isn't an AI model demo; it's an end-to-end decision chain.

Multi-Domain Experience

Battle-tested across domains

Mammography quality, colorectal cancer tracking, endotracheal tube alerts, sudden cardiac death prediction, emergency response command, additive manufacturing defects, smart shipyard, coffee-tea-cocktail evaluation — every domain's experience fuels the next.

Apple × Edge AI

Running an entire hospital's AI
on a single Mac Mini.

In partnership with Dynabook and Apple Taiwan,
we deploy precision medical AI on a Mac Mini measuring just 12.7 cm square.

The built-in Neural Engine and ML accelerators
deliver ample compute at low power draw.
It even fits in a mobile mammography unit,
bringing AI to rural areas.

This is not a demo concept —
it's a system already running in hospitals.

Why Apple Silicon

Unified memory architecture eliminates the need for a discrete GPU for AI inference. The native MLX framework accelerates our models directly at the hardware level. Power consumption is one-tenth that of a desktop, with no compromise on inference speed.

Why Edge, Not Cloud

Medical data requires the highest level of privacy. Patient images cannot go to the cloud — this is not an option, it's a red line. Edge computing is not a compromise; it's the only correct answer.

Clinical Results

Three AI models are clinically operational at Kaohsiung Medical University Chung-Ho Memorial Hospital. Mammography quality, endotracheal tube alerts (96% accuracy), sudden cardiac death prediction — all on a single Mac Mini.

System Integration · Proven

National smart emergency response command system — built on the same Mac Mini edge architecture. AI Agent + MCP medical protocol, 5 minutes to establish a smart command post at disaster sites.

Advantages

Data stays on-premise

100% edge computing, zero cloud dependency. Privacy and security are design principles, not add-on features.

Regulation-ready

IEC 62304, ISO 14971, GDPR — compliance is built into product design, not an afterthought.

End-to-end delivery

From PoC to production, from data to UI. We deliver working systems, not just models. 100% fulfillment rate.

Explainable AI

Every determination has evidence, every inference is traceable. Meeting FDA/TFDA explainability requirements for AI medical devices.

Global partners

Dynabook, Apple Taiwan, Intel, Kudan (Japan), Kaohsiung Medical University.

22nd Taiwan National Innovation Award

Selected from a record field of 392 entries, we received the Startup Prize for our medical AI image-quality management technology.