<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>deTrouble 滌擾</title><description>delete trouble, create value — Practical ideas on simplicity, utility, and feasibility.</description><link>https://www.detrouble.com/</link><item><title>The Thinking That AI Can&apos;t Replace</title><link>https://www.detrouble.com/en/ai-wont-think-for-you/</link><guid isPermaLink="true">https://www.detrouble.com/en/ai-wont-think-for-you/</guid><description>AI recombines known truths at scale — but doesn&apos;t create new ones. What stays scarce: perception, judgment, taste, and the willingness to be wrong. How to use AI without losing them.</description><pubDate>Tue, 21 Apr 2026 00:00:00 GMT</pubDate><content:encoded>## &quot;Just Ask AI&quot;

*&quot;Why read books when you can just ask AI?&quot;*

I keep hearing this. As someone who uses AI every day — for coding, research, writing, analysis — I understand the appeal. AI answers questions remarkably well.

But this framing misses what reading and thinking actually do. They don&apos;t just move information from one place to another. They reshape how your mind processes the world. That reshaping — the slow change in how you see, connect, and judge — cannot be outsourced.

![A view beyond the immediate — perspective changes everything](/images/blog-horizon.jpg)

What many people want from AI is simple: skip the effort, avoid the discomfort, get the result. For routine, well-defined tasks — formatting documents, summarising reports, generating boilerplate — this works. AI handles the repetitive brilliantly.

But for anything that requires judgment, strategy, or creative decision-making, the same people end up frustrated. *&quot;This AI is useless.&quot;* The tool isn&apos;t the problem. The expectation is.

## How I Actually Use AI

My experience has been the opposite of what the headlines promise. AI hasn&apos;t replaced my thinking — it has *intensified* it. Every meaningful conversation triggers questions I hadn&apos;t considered, exposes assumptions I didn&apos;t know I had, and forces me to articulate things I&apos;d left vague.

The key difference is this: **I don&apos;t look for AI&apos;s best performance. I look for its limits.**

Most people test AI on its strengths — impressive demos — then apply it to their own problems, which don&apos;t fit, and conclude it&apos;s broken. I do the opposite. I probe where it fails, where it hallucinates, where it gives confident-sounding nonsense. Once you map the ceiling, you deploy it precisely.

### Two roles, not one

**Consultant.** AI&apos;s knowledge base is vast. It connects ideas across domains, surfaces perspectives I&apos;d need weeks of reading to find. The &quot;aha moments&quot; that used to come from months of study now happen in concentrated bursts during a single conversation.

**Operator.** I&apos;m a creator and strategist, but not an operator. I&apos;ll obsessively prototype and iterate on a new idea — but once it becomes routine, the repetition drains me faster than the creative work. That&apos;s not laziness; it&apos;s a cognitive pattern I&apos;ve had to accept. AI fills that role precisely — repetitive, reference-based, well-defined work is where it excels.

The mistake is collapsing these into one. AI is a superb operator. A useful consultant. It is not your thinker.

## Recombination Is Not Understanding

AI is extraordinarily good at playing games humans have already designed — optimising business models, navigating social systems, solving quantifiable problems. It operates within our constructs faster and more thoroughly than any human.

But it cannot perceive the world through lived experience, generate genuinely new paradigms of understanding, or feel the texture of a problem before it has a name. This isn&apos;t a limitation that more compute will solve. It&apos;s structural.

Look at how AI actually achieves its breakthroughs:

**In protein design**, [RFdiffusion](https://www.nature.com/articles/s41586-023-06415-8) (*Nature*, 2023) created proteins that don&apos;t exist in nature — structures that violated conventional rules yet folded correctly when synthesised. [ProGen](https://www.nature.com/articles/s41587-022-01618-2) generated functional enzymes with as little as 18% similarity to any natural protein. **In rocket engineering**, NASA&apos;s AI-driven topology optimisation produced combustion chamber components with organic, bone-like geometries no engineer would draw — yet they outperformed conventional designs.

These results are beyond human intuition. But not beyond physics. AI explored a design space too vast for biological cognition and found novel *combinations* within existing principles. The principles themselves — the physics, the chemistry — still came from human understanding.

**The pattern: AI doesn&apos;t create new truths. It recombines known truths at a scale we can&apos;t match.** Recombination is powerful. But it is not understanding.

![Connected networks — powerful at recombination, constrained by design](/images/blog-network.jpg)

Here&apos;s what makes this concrete: the major breakthroughs in neural networks didn&apos;t come from making models *more* general. They came from **constraining** them. Convolutional networks restrict each neuron to a small image patch. Transformers selectively attend to relevant input, ignoring most of it. The [Lottery Ticket Hypothesis](https://arxiv.org/abs/1803.03635) (Frankle &amp; Carbin, 2019) found that sparse subnetworks — often under 10% of a network&apos;s connections — can be trained to match the full network&apos;s performance.

**Unconstrained models can represent anything but learn nothing well. Constraints that match reality create focus. Focus enables depth.**
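The constraint-and-prune idea is small enough to sketch. Below is a minimal magnitude-pruning example in numpy; it illustrates the general pruning family, not the Lottery Ticket training procedure itself, and the function name and 10% figure are mine:

```python
import numpy as np

def magnitude_prune(weights, keep_fraction):
    """Zero all but the top keep_fraction of weights by magnitude."""
    flat = np.abs(weights).ravel()
    k = max(1, int(flat.size * keep_fraction))
    threshold = np.sort(flat)[-k]            # smallest magnitude we keep
    mask = np.abs(weights) >= threshold      # boolean mask of survivors
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))                  # a small random weight matrix
pruned = magnitude_prune(w, keep_fraction=0.1)
sparsity = 1.0 - np.count_nonzero(pruned) / pruned.size
print(f"{sparsity:.0%} of weights removed")
```

Keeping only the largest weights is the crudest possible constraint, yet in practice networks pruned this way retrain to nearly full accuracy; that empirical surprise is what the research above formalises.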

The question that stays with me: should humans follow the same strategy? Narrow our perception until we&apos;re optimised for a single domain? Become a very efficient function?

I don&apos;t think so.

## The World Is Larger Than Your Model of It

Here&apos;s a fact that changed how I think about thinking.

[Neuroscience research](https://www.annualreviews.org/doi/10.1146/annurev.neuro.27.070203.144152) (Douglas &amp; Martin, 2004) shows that only **5-10% of synapses on a cortical neuron come from external sensory input**. The remaining 90-95% come from other neurons within the brain — internal feedback loops, predictions, associations.

The brain&apos;s resting activity — the [Default Mode Network](https://www.pnas.org/doi/10.1073/pnas.98.2.676) discovered by Marcus Raichle — consumes the majority of its energy. Externally-driven task activity adds comparatively little. [Karl Friston&apos;s Free Energy Principle](https://www.nature.com/articles/nrn2787) goes further: perception is not passive reception. It&apos;s active prediction. The brain generates reality from the inside out; sensory input serves as *correction signals* — updating the model when predictions are wrong.

What you &quot;see&quot; is mostly what your brain *predicted* you would see. External reality is a calibration tool, not the main input.
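A toy update loop makes the claim concrete. This is a caricature of predictive-coding models, not Friston&apos;s formalism; the gain value and numbers are arbitrary:

```python
def perceive(prior, observations, gain=0.1):
    """Belief starts at the prior; each observation supplies only a correction."""
    belief = prior
    for sensed in observations:
        error = sensed - belief    # prediction error: the correction signal
        belief += gain * error     # nudge the internal model toward reality
    return belief

# A strong prior (0.0) meets twenty observations that all say 1.0:
belief = perceive(prior=0.0, observations=[1.0] * 20)
print(round(belief, 3))
```

Even after twenty contradicting observations, the belief still carries a trace of the prior; the percept is always a blend of model and correction.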

This means something profound: **most of your mental life is internal narrative, not external observation.** We live inside our own stories and use the world to edit them.

![The ocean of truth — always larger than what we&apos;ve found](/images/blog-ocean.jpg)

The gap between internal model and external reality is always there. The question is whether you know it, respect it, and actively work to close it — or let your model run unchecked.

The great minds who expanded human understanding shared one trait: **they held their own models loosely.** They treated the world with near-awe — not naively, but because they understood that reality is always more complex than any representation of it.

Newton, near the end of his life:

&gt; *&quot;I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.&quot;*

Feynman, in his 1974 [Caltech commencement address](https://calteches.library.caltech.edu/51/2/CargoCult.htm):

&gt; *&quot;The first principle is that you must not fool yourself — and you are the easiest person to fool.&quot;*

This isn&apos;t modesty. It&apos;s a cognitive strategy rooted in how the brain actually learns.

## The Small Self That Sees Further

The [Dunning-Kruger effect](https://psycnet.apa.org/record/1999-15054-002) (Kruger &amp; Dunning, 1999) shows that low-competence individuals overestimate their abilities — because recognising incompetence requires the skills they lack. The inverse matters more: high-competence individuals maintain awareness of their gaps. That awareness is what keeps them learning.

Carol Dweck&apos;s growth mindset research found something striking at the neural level: people who believe their abilities are fixed show *reduced neural processing of corrective feedback* — their brains literally absorb less information after errors ([Moser et al., 2011](https://journals.sagepub.com/doi/10.1177/0956797611419520)). People who see themselves as works-in-progress process errors more deeply. Their brains stay open to the world&apos;s corrections.

[Kaplan, Gimbel &amp; Harris (2016)](https://www.nature.com/articles/srep39589) went further: when identity-linked beliefs are challenged, the brain activates threat responses — the amygdala fires, analytical processing shuts down. **The feeling of &quot;I know&quot; triggers the same neural circuits as physical safety.** The brain resists new information the way a body resists a blow.

In Zen, this is called **Shoshin** (初心) — beginner&apos;s mind:

&gt; *&quot;In the beginner&apos;s mind there are many possibilities, but in the expert&apos;s mind there are few.&quot;*
&gt; — [Shunryu Suzuki](https://www.shambhala.com/zen-mind-beginner-s-mind-2378.html), 1970

Across neuroscience, psychology, and contemplative traditions, the pattern is consistent: **a smaller self creates a larger aperture.** When you stop filling the frame with certainty, you see more of what&apos;s actually there.

This is what I mean by respect for the world. Not reverence in a religious sense — but recognising that reality is always richer, stranger, and more intricate than your current understanding. That recognition is the engine of learning. Without it, you stop updating. You become a closed loop — 90% internal narrative, 10% reality, and a shrinking willingness to let the 10% change the 90%.

## The Danger of AI-Inflated Certainty

This is exactly what I&apos;ve watched happen as AI becomes widespread.

People use AI and become *more certain, not less*. Confidence grows. Curiosity shrinks. AI reinforces your existing frame — ask it to confirm your view and it will. The output feels like validation, and validation feels like competence.

But competence built on AI answers without underlying understanding is hollow. It works until context shifts — then collapses, because no mental model was built to adapt.

When everyone has this hollow competence — when &quot;good enough&quot; output costs nothing — markets reveal the truth. Desktop publishing, stock photography, music production, app development: barriers drop, supply explodes, unit value collapses, the top 1% captures 90%+ of value. The winners are never those who merely used the tool. They brought what the tool couldn&apos;t.

The question shifts from *what can AI do?* to ***what can I do with AI that others can&apos;t?***

That answer is never about the tool.

## What Remains Yours

If AI commoditises output, what stays scarce?

Not information — AI has more. Not speed — AI is faster. Not pattern matching — AI sees patterns across more data than any human can.

What stays scarce:

- **Perception** — noticing what matters before it has a name
- **Judgment** — knowing which of ten AI options fits *this* situation
- **Taste** — the irreducible sense of what feels true, elegant, or resonant
- **Self-awareness** — understanding your cognitive patterns well enough to deploy yourself effectively
- **Willingness to be wrong** — the prerequisite for learning anything genuinely new

These are functions of a mind that stays open, stays humble, and stays engaged with direct experience. A mind that treats the world as something to understand — not optimise. A mind that knows it is small, and sees further because of it.

AI handles the repeatable, the quantifiable, the well-defined. It surfaces knowledge you&apos;d take years to find. It operates where you&apos;d rather not.

But the thinking — the perception, the judgment, the thirst to understand things as they actually are rather than as you wish them to be — that stays yours.

In a world where everyone has the same AI, that&apos;s exactly where your value lives.

&gt; The people who understand their own smallness will use AI to see further.
&gt; The people who use AI to feel bigger will end up seeing less.

Technology should be like breathing air — invisible yet essential. But air doesn&apos;t breathe for you. (Yes, a ventilator can. But if you&apos;re on life support, you&apos;ve got bigger problems than your AI strategy.)</content:encoded></item><item><title>AI 取代不了的思考</title><link>https://www.detrouble.com/zh/ai-wont-think-for-you/</link><guid isPermaLink="true">https://www.detrouble.com/zh/ai-wont-think-for-you/</guid><description>AI 能大規模重組已知真理——但不能創造新的。什麼始終稀缺：感知、判斷、品味、以及願意犯錯的勇氣。如何用 AI 而不失去它們。</description><pubDate>Tue, 21 Apr 2026 00:00:00 GMT</pubDate><content:encoded>## 「問 AI 就好了」

*「讀書有什麼用？問 AI 就好了。」*

這句話我不斷聽到。作為一個每天都在用 AI 的人——寫程式、做研究、撰文、分析——我理解這種想法的吸引力。AI 回答問題確實很出色。

但這個說法忽略了閱讀和思考真正在做的事。它們不只是把資訊從一個地方搬到另一個地方。它們在重塑你的大腦處理世界的方式。那個重塑——你觀看、連結和判斷方式的緩慢改變——沒辦法外包。

![超越眼前的視野——視角改變一切](/images/blog-horizon.jpg)

很多人想從 AI 得到的很簡單：跳過努力、避開不適、直接拿結果。對於常規的、定義清晰的任務——整理文件、摘要報告、生成模板——這確實能用。AI 處理重複工作很出色。

但對於需要判斷力、策略和創意決策的事情，同一批人最終只會感到挫敗。*「這個 AI 真垃圾。」* 不是工具有問題，是期望有問題。

## 我實際上怎麼用 AI

我的經驗跟新聞標題的承諾完全相反。AI 沒有取代我的思考——它**加速**了我的思考。每一次有意義的對話都觸發我沒想過的問題、暴露我不知道自己有的假設、迫使我把模糊的想法說清楚。

關鍵的差別在於：**我不看 AI 的最佳表現，我看它的限制。**

大部分人在 AI 的強項上測試——看到驚豔的示範——然後應用到自己完全不同的問題上，最後下結論說工具壞了。我反過來做：我探測它在哪裡失敗、在哪裡幻覺、在哪裡給出自信但荒謬的答案。一旦摸清天花板，你就能精準地部署它。

### 兩個角色，不是一個

**顧問。** AI 的知識庫龐大，擅長跨領域連結。它能帶出我需要幾週閱讀才能接觸到的視角。過去要靠數月研讀才有的「啟發時刻」，現在在一次對話中就能密集發生。

**操作者。** 我是創造者和制定者，但不是操作者。新想法我會瘋狂地打磨和迭代——但一旦變成常規，重複的操作比創造更消耗我。這不是懶，是一個我必須接受的認知模式。AI 精確地填補了這個角色——重複性、有參考資料的、定義清晰的工作，正是它擅長的。

錯誤是把兩個角色合成一個——期待 AI 是你的思想家。它是出色的操作者，有用的顧問，但不是你的思想家。

## 重組不等於理解

AI 非常擅長玩人類已經設計好的遊戲——優化商業模式、操作社會系統、解決可量化的問題。它在我們的建構物中運作，比任何人都快、都全面。

但它不能透過親身經歷感知世界、不能產生真正全新的理解範式、不能在問題還沒有名字之前感受它的質地。這不是更多算力能解決的，是結構性的。

看 AI 實際上是如何取得突破的：

**蛋白質設計中**，[RFdiffusion](https://www.nature.com/articles/s41586-023-06415-8)（*Nature*，2023）創造了自然界不存在的蛋白質——違反傳統規則的結構，但合成後卻能正確折疊。[ProGen](https://www.nature.com/articles/s41587-022-01618-2) 生成了與任何自然蛋白質相似度低至 18% 的功能性酶。**火箭工程中**，NASA 的 AI 拓撲優化設計了有機、骨骼狀結構的燃燒室組件——沒有工程師會畫出來——但效能超越了傳統設計。

這些結果超越了人類直覺，但沒有超越物理學。AI 探索了人類認知無法導航的設計空間，在現有原理中找到了新的*組合*。原理本身——物理、化學——仍然來自人類的理解。

**模式是：AI 不創造新的真理。它在我們無法匹敵的規模上重新組合已知的真理。** 重組很強大，但它不是理解。

![連結的網絡——擅長重組，受設計限制](/images/blog-network.jpg)

讓這個更具體：神經網路的重大突破不是來自讓模型更通用，而是來自**限制**它們。卷積網路限制每個神經元只看一小塊圖像。Transformer 選擇性地聚焦，忽略大部分輸入。[彩票假說](https://arxiv.org/abs/1803.03635)（Frankle &amp; Carbin, 2019）發現只含原網路不到 10% 連接的稀疏子網路，訓練後表現可與完整網路相當。

**不受限制的模型可以表示任何東西，但學不好任何東西。匹配現實的限制創造聚焦。聚焦成就深度。**

一直縈繞在我心頭的問題是：人類是否應該遵循同樣的策略？收窄自己的感知，直到為單一領域而優化？變成一個非常高效的函數？

我不這麼認為。

## 世界比你的模型更大

這是一個改變了我對思考看法的事實。

[神經科學研究](https://www.annualreviews.org/doi/10.1146/annurev.neuro.27.070203.144152)（Douglas &amp; Martin, 2004）表明，皮質神經元上只有 **5-10% 的突觸來自外部感官輸入**。其餘 90-95% 來自大腦內部——反饋迴路、預測、聯想。

大腦的靜息活動——Marcus Raichle 發現的[預設模式網絡](https://www.pnas.org/doi/10.1073/pnas.98.2.676)——消耗了大部分能量。外部驅動的任務活動只增加了相對少的。[Karl Friston 的自由能原理](https://www.nature.com/articles/nrn2787)走得更遠：感知不是被動接收，是主動預測。大腦從內部生成現實模型，感官輸入主要作為*修正信號*——預測錯了才更新。

你「看到」的東西大部分是大腦*預測*你會看到的。外部現實是校準工具，不是主要輸入。

這意味著：**你的大部分心理生活是內部敘事，而非外部觀察。** 我們活在自己的故事裡，用世界來編輯它們。

![真理的海洋——永遠比我們已發現的更大](/images/blog-ocean.jpg)

內部模型和外部現實之間的落差永遠存在。問題是你是否知道它在那裡、尊重它、並主動地縮小它——還是讓你的模型不受檢驗地運轉。

真正拓展人類理解的偉大頭腦有一個共同特質：**他們對自己的模型保持鬆弛。** 他們以近乎敬畏的態度對待世界——不是天真，而是理解現實永遠比任何表述都更複雜。

牛頓，臨終前：

&gt; *「我不知道世界會如何看我，但對我自己來說，我似乎只是一個在海邊玩耍的男孩，偶爾找到一顆更光滑的鵝卵石或更漂亮的貝殼，而真理的汪洋大海就未被發現地躺在我面前。」*

費曼，在 1974 年[加州理工畢業演講](https://calteches.library.caltech.edu/51/2/CargoCult.htm)中：

&gt; *「第一原則是你不能欺騙自己——而你是最容易被自己欺騙的人。」*

這不是謙虛。這是根植於大腦實際學習方式的認知策略。

## 渺小的自我，看得更遠

[Dunning-Kruger 效應](https://psycnet.apa.org/record/1999-15054-002)（Kruger &amp; Dunning, 1999）表明，低能力者系統性地高估自己——因為識別無能本身需要他們所缺乏的技能。但反面更重要：高能力者保持對差距的覺察。那份覺察讓他們持續學習。

Carol Dweck 的 growth mindset 研究在神經層面發現了驚人的事：認為能力固定的人對修正反饋的神經處理降低——犯錯後，大腦字面上吸收更少的資訊（[Moser et al., 2011](https://journals.sagepub.com/doi/10.1177/0956797611419520)）。把自己看作未完成品的人，處理錯誤更深入。他們的大腦對世界的修正保持開放。

[Kaplan, Gimbel &amp; Harris (2016)](https://www.nature.com/articles/srep39589) 走得更遠：當與身份綁定的信念被挑戰，大腦啟動威脅反應——杏仁核活化，分析處理關閉。**「我知道」的感覺觸發了與身體安全相同的神經迴路。** 大腦抵抗新資訊就像身體抵抗打擊。

在禪宗中，這叫**初心**（Shoshin）：

&gt; *「初學者的心中有很多可能性，但專家的心中很少。」*
&gt; — [鈴木俊隆](https://www.shambhala.com/zen-mind-beginner-s-mind-2378.html)，1970

從神經科學、心理學到沈思傳統，一致的模式：**更小的自我創造更大的開口。** 當你不再用確定性填滿畫面，就能看到更多實際存在的東西。

這就是我說的對世界的尊重。不是宗教的崇敬——而是承認現實永遠比你當前的理解更豐富、更奇異、更精密。這個承認是學習的引擎。沒有它，你停止更新。你變成一個封閉迴路——90% 內部敘事，10% 現實，以及越來越小的意願讓那 10% 改變那 90%。

## AI 膨脹的確定性

這正是我在 AI 普及後觀察到的。

人們使用 AI 後，變得*更確定，而不是更不確定*。自信增長，好奇心萎縮。AI 強化你現有的框架——叫它確認你的觀點，它會。輸出感覺像驗證，驗證感覺像能力。

但建立在 AI 答案上而沒有底層理解的能力是空的。語境不變時有效——語境一變就崩塌，因為從未建構過能適應的心智模型。

當每個人都有這種空心能力——當「還不錯」的門檻為零——市場會說實話。桌面出版、圖庫攝影、音樂製作、App 開發：門檻降低，供給爆炸，單位價值崩潰，前 1% 拿走 90% 以上的價值。贏家從來不是僅僅使用工具的人，而是帶來了工具無法提供的東西的人。

問題從*AI 能做什麼*變成 ***我能用 AI 做什麼，而其他人不能？***

答案永遠不在工具上。

## 你的部分

如果 AI 將產出商品化，什麼保持稀缺？

不是資訊——AI 比你多。不是速度——AI 更快。不是模式匹配——AI 在更多數據中看到更多模式。

保持稀缺的是：

- **感知**——在事物還沒有名字之前就注意到什麼重要
- **判斷**——知道十個 AI 選項中哪一個適合*這個*情境
- **品味**——對什麼感覺真實、優雅或共鳴的不可還原的感覺
- **自我覺察**——足夠了解自己的認知模式，以便有效地部署自己
- **願意犯錯**——學到任何真正新東西的前提

這些是一個保持開放、保持謙卑、持續與直接經驗接觸的心智的功能。一個把世界當作需要理解、而非需要優化的對象的心智。一個知道自己渺小——並因此看得更遠的心智。

AI 處理可重複的、可量化的、定義清晰的事情。它帶出你需要數年才能找到的知識。它在你不想操作的地方操作。

但思考——感知、判斷、渴望按事物的本來面目去理解而非按你希望的樣子——那是你的。

在一個每個人都能使用同樣 AI 的世界裡，那正是你的價值所在。

&gt; 理解自己渺小的人，會用 AI 看得更遠。
&gt; 用 AI 讓自己感覺更大的人，最終會看得更少。

科技應該像呼吸空氣一樣——察覺不到卻不可或缺。但空氣不會替你呼吸。（對，呼吸機可以。但如果你已經需要靠機器維生，你的問題大概不是 AI 策略了。）</content:encoded></item><item><title>Platforms Got AI Backwards</title><link>https://www.detrouble.com/en/platforms-got-ai-backwards/</link><guid isPermaLink="true">https://www.detrouble.com/en/platforms-got-ai-backwards/</guid><description>Platform AI is a self-checkout machine — cost-shifting disguised as innovation. The future is open doors: MCP, A2A protocols, and your AI agent talking to theirs.</description><pubDate>Fri, 17 Apr 2026 00:00:00 GMT</pubDate><content:encoded>## Sell Shovels, Not Gold

During the California Gold Rush of 1848, most people scrambled to mine gold. [Samuel Brannan](https://en.wikipedia.org/wiki/Samuel_Brannan) did something different. He bought every mining pan in the region at $0.20 each and sold them for $15. Nine weeks, $36,000, California&apos;s first millionaire. Not by mining. By enabling miners.

The AI gold rush is following the same pattern. And most platforms are making the miner&apos;s mistake.

## Everyone Rushed to Mine

[Notion](https://notion.so) added Notion AI. [Canva](https://canva.com) built Magic Studio — twelve AI tools. [Adobe](https://adobe.com) embedded Firefly into Photoshop. [Microsoft](https://microsoft.com) put Copilot in every Office app. [Zoom](https://zoom.us) added AI meeting summaries. [Slack](https://slack.com) added AI channel digests.

The pitch was always the same: &quot;Now with AI.&quot; Every platform rushed to put AI inside their walls, hoping it would keep users locked in — and paying more.

The results? Generic content. Summaries that miss what matters. Designs that look like everyone else&apos;s. You can&apos;t choose a different model, can&apos;t customise the prompts, can&apos;t control the output. You get what they give you.

This is the self-checkout machine of AI.

## The Self-Checkout Problem

![The self-service trap](/images/blog-self-checkout.jpg)

Think about self-checkout at a supermarket. QR code ordering at restaurants. Airport check-in kiosks. These are sold as &quot;smart technology.&quot; They&apos;re the opposite — **cost-shifting disguised as innovation.** The work that trained staff used to do is now pushed onto you. Every confusing interface, every &quot;please try again&quot; is friction being created, not eliminated.

Platform AI features follow the same logic. &quot;Here, use our AI tool&quot; shifts the work of getting good results onto you. Learn our prompts. Accept our limitations. Pay our premium. It&apos;s a self-checkout machine with an AI label.

What I actually want is the opposite. I want *my* AI — one that knows my preferences, my style, my standards — to operate across all these services. I don&apos;t want Notion&apos;s AI writing for me. **I want my AI writing in Notion.**

## Build Doors, Not Walls

For that to work, platforms don&apos;t need AI features. They need **doors** — APIs, protocols, interfaces that let external AI agents interact with their systems.

![Platform AI (locked) vs Open API (doors)](/images/walled-vs-open.svg)

[Shopify](https://shopify.com) gets this. They&apos;ve built [MCP servers](https://shopify.dev/docs/agents), agentic storefronts, and a Universal Checkout Protocol. AI agents from [ChatGPT](https://chatgpt.com), [Perplexity](https://perplexity.ai), and [Google AI](https://ai.google) can browse products, complete purchases, and interact with merchants — all through open protocols. No proprietary AI feature. Just doors.

Shopify didn&apos;t build a better AI. They built better access for *your* AI. That&apos;s selling shovels.

And here&apos;s what most platforms haven&apos;t grasped: **I can build their features myself.** With [vibe coding](/en/choosing-vibe-coding-tools) I can build a writing assistant, a design generator, an analytics dashboard. Code isn&apos;t the moat anymore. But data, integrations, ecosystem — those are worth paying for. If I can access them through my own tools.

The building blocks are already emerging. [Anthropic](https://anthropic.com)&apos;s [MCP](https://modelcontextprotocol.io) (Model Context Protocol) is becoming the USB port for AI — a standard interface that lets any model interact with any tool. Shopify, [Notion](https://notion.so), and [Block](https://block.xyz) are already supporting it. [Google](https://google.com)&apos;s [A2A](https://google.github.io/A2A/) (Agent-to-Agent) protocol tackles the next layer — how agents from different companies discover each other, exchange capabilities, and coordinate tasks.

This matters because most AI frameworks today solve the wrong problem. Tools like LangGraph and CrewAI help developers *orchestrate* agents. That&apos;s useful. But the real challenge isn&apos;t combining your own agents — it&apos;s enabling agents from different organisations to talk to each other. That requires **protocols**, not frameworks. Whether MCP and A2A become the universal standard — like USB did — or fracture into competing specs is still unclear. But the direction is right: open protocols over proprietary features.
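Part of why a protocol can win here is how little it asks. An MCP tool invocation is plain JSON-RPC 2.0; the sketch below shows the envelope, with a hypothetical tool name and arguments rather than any real storefront endpoint:

```python
import json

def mcp_tool_call(tool, arguments, request_id):
    """Build the JSON-RPC 2.0 envelope MCP uses for a tool invocation."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(request)

# Hypothetical tool name and arguments, purely for illustration:
wire = mcp_tool_call("search_products", {"query": "espresso machine"}, request_id=1)
decoded = json.loads(wire)
print(decoded["method"])
```

Any client that can emit this envelope can drive any compliant server, which is exactly the USB-like property frameworks alone cannot provide.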

## The Concierge Future

![A specialist who serves you](/images/blog-service.jpg)

But even APIs aren&apos;t the end of the story. APIs are like a self-service shop with well-organised shelves — better than a locked warehouse, but you still have to find things yourself.

The real end state is a concierge.

![You → Your Agent → Platform Agents → Your Result](/images/agent-to-agent.svg)

I believe everyone will eventually have a personal AI agent — one that knows your habits, preferences, and standards. When you need something done, you tell your agent. It doesn&apos;t learn every platform&apos;s API. It talks to the platform&apos;s agent.

Like a hotel concierge. You don&apos;t study restaurant menus, call the taxi company, or negotiate with the theatre. You tell the concierge what you want. The concierge knows who to call and how to get it done.

This is how human society already works. We don&apos;t do everything ourselves. We find the right person for each task and communicate intent. AI should work the same way — your generalist agent talks to specialist agents, each an expert in their own domain.

For this to work, platforms need to stop building AI *for* users and start building AI *about* their own systems — specialist agents that truly understand every feature, every edge case, every integration. My agent talks to their agent. The result is what I intended. No learning curve. No self-checkout.
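The routing itself is trivial; the hard part is cross-vendor discovery and trust, which is what protocols like A2A target. Here is a toy sketch of the concierge pattern, with every name invented for illustration:

```python
class Concierge:
    """A generalist agent that routes intent to specialist agents by capability."""

    def __init__(self):
        self._specialists = {}

    def register(self, capability, agent):
        self._specialists[capability] = agent

    def handle(self, capability, intent):
        specialist = self._specialists.get(capability)
        if specialist is None:
            return f"no specialist for {capability!r}"
        return specialist(intent)

concierge = Concierge()
# Each specialist only knows its own domain (names are invented):
concierge.register("dining", lambda intent: f"booked: {intent}")
concierge.register("travel", lambda intent: f"arranged: {intent}")

print(concierge.handle("dining", "table for two at 8pm"))
```

The concierge never needs to know how a specialist works, only which capability it advertises.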

## The Real Question

[Steve Jobs](https://en.wikipedia.org/wiki/Steve_Jobs) said: *&quot;Simple can be harder than complex. You have to work hard to get your thinking clean to make it simple.&quot;* Apple&apos;s products never asked users to learn technology. They made technology disappear into the task.

That&apos;s the standard. A simple question reveals whether a platform meets it: **is the AI serving the user, or serving the platform?**

If the AI only works inside the platform, requires a premium subscription, can&apos;t be replaced with your own model, and has no API for external agents — it&apos;s serving the platform. It&apos;s a retention tool with an AI label.

If the platform opens its doors and lets your AI interact on your terms — that&apos;s the one I&apos;ll pay for. Not for their AI. For their transparency.

The platforms that understand this will build doors. The rest will keep mining while someone else sells the shovels.

---

*Read next: [You Need the Box Before You Can Think Outside It](/en/when-ai-coding-makes-things-worse) | [How to Choose a Vibe Coding Tool](/en/choosing-vibe-coding-tools)*</content:encoded></item><item><title>You Need the Box Before You Can Think Outside It</title><link>https://www.detrouble.com/en/when-ai-coding-makes-things-worse/</link><guid isPermaLink="true">https://www.detrouble.com/en/when-ai-coding-makes-things-worse/</guid><description>A swing trading system rebuilt with AI looked right but was wrong at every layer. The missing step: collision — the deep discussion that transfers implicit intuition into shared understanding.</description><pubDate>Fri, 17 Apr 2026 00:00:00 GMT</pubDate><content:encoded>## Same Person, Same Ideas, Worse Results

I have a swing trading analysis system. It detects stock contraction patterns, scores them through a neural network, and helps me decide what to trade. I originally built it with [Gemini](https://gemini.google.com) for discussion and [GitHub Copilot](https://github.com/features/copilot) for implementation.

Then I lost the project files. So I rebuilt it with [Claude Code](https://claude.ai/code). Same person. Same trading philosophy. Same detection logic. Same neural network architecture. The only variable was the tool.

**The new version was fundamentally wrong at every layer.**

![A jigsaw puzzle with mismatched pieces](/images/blog-puzzle.jpg)

Claude Code did everything I asked. I used its planning mode. I told it to research the domain. I described my philosophy in detail. It planned, built, tested, and told me everything looked good. Every layer was wrong.

Not wrong in a way that crashes. Wrong in a way that **looks right**. The contraction detection found patterns — just not the ones I meant. The labels categorised trades — just not the way I think about them. The backtest produced numbers — just not against the scenarios that matter.

The git history: **29 commits in 4 days. 12 were rewrites or fixes.** I spent 95% of my time not building, but verifying — layer by layer, checking whether the output matched my intent.

When I found fragments of my old code and asked Claude Code to compare, the response was damning. Feature by feature, layer by layer, it walked through both versions — and for every single component, the old design was either more complete, more considered, or accounted for something the new version hadn&apos;t even thought about.

## The Missing Collision

The obvious reaction is &quot;you should have discussed more.&quot; I did. Claude Code has planning mode. I used it. I asked it to research the domain. None of that fixed the problem. Because the problem isn&apos;t whether the tool *plans*. It&apos;s what the tool *thinks planning means*.

![Ideas colliding, not just being exchanged](/images/blog-discussion.jpg)

With Gemini, the conversation was a collision. I&apos;d propose a detection method. Gemini would analyse it. I&apos;d push back — &quot;what happens when a stock gaps down mid-contraction? What about low-float names that compress differently? What if the sector is rotating out?&quot; — and propose solutions for each. Gemini would stress-test my solutions, find holes I hadn&apos;t seen. I&apos;d refine, re-propose, argue back. Sometimes for hours on a single component, until neither of us could break it.

That collision forced me to externalise intuition I didn&apos;t even know was implicit. Each argument transferred another piece of the puzzle from my head into the shared understanding. By the time I started coding, the AI had enough of my pieces to build *my* picture.

Claude Code&apos;s planning is different. It listens, organises, confirms, executes. Excellent at turning clear requirements into working code. But it doesn&apos;t challenge the assumptions behind the requirements. It trusts what you said, plans around it, and builds. The gap: **understanding what you said versus understanding what you meant.**

For a login page, what you say and what you mean are the same thing. For a trading system — where &quot;correct&quot; means it reflects 15 years of intuition about market behaviour — the gap is everything.

## The Confidence Trap

There&apos;s a deeper problem. The tool validates its own work. It tests, analyses the output, and tells you it&apos;s correct. When the backtest shows a 65% win rate, it says &quot;results look reasonable.&quot;

That confidence costs you. You trust the output, move forward, invest time — and discover the problem only when you verify manually, layer by layer. With my old workflow, verification was minimal. The design collision was so thorough that the implementation naturally reflected my intent. I didn&apos;t need to check because the thinking had already been done.

I don&apos;t care if the trading system makes money — that depends on markets and execution. I care about **fidelity**: does the output match what I envisioned? A tool that builds the wrong thing perfectly is worse than a tool that builds the right thing roughly.

![Implement first vs Think first — the workflow that matters](/images/think-first-build-later.svg)

## The Romance of Creation

People love saying &quot;think outside the box.&quot; But they forget something: **you need a box first.**

The box is your understanding. Your domain knowledge. Your years of watching what works and what doesn&apos;t. Without the box, there&apos;s no &quot;outside&quot; — there&apos;s just randomness. Some randomness is good — unexpected connections and happy accidents are gifts of working with AI. But when *everything* is random, when the output has no anchor to your understanding, it stops being creation. It becomes noise that happens to compile.

![The thinking before the building](/images/blog-blueprint.jpg)

[Nikola Tesla](https://en.wikipedia.org/wiki/My_Inventions:_The_Autobiography_of_Nikola_Tesla) understood this. In 1919 he wrote:

&gt; *&quot;When I get an idea, I start at once building it up in my imagination. I change the construction, make improvements and operate the device in my mind. It is absolutely immaterial to me whether I run my turbine in thought or test it in my shop.&quot;*

He would construct, test, and perfect an invention entirely in his mind — only building the physical machine once the mental blueprint was flawless. Twenty years without exception.

That&apos;s the part AI coding tools skip. They jump straight to the machine. And the machine runs — but it&apos;s not *your* machine. It&apos;s built from the pieces the AI had, not the pieces you carry in your head.

The blueprint isn&apos;t the description you give the AI. It&apos;s the complete mental model — tested, pressure-tested, refined through collision — that lives in your head before you describe anything. And building that model takes the slow, messy, uncomfortable work of deep thinking that no planning mode can automate.

The romance of creation isn&apos;t in watching the machine run. It&apos;s in the journey of thought that made the machine inevitable.

Sometimes the most productive thing you can do with an AI coding tool is refuse to write code — and take that journey first.

---

*Read next: [How to Choose a Vibe Coding Tool](/en/choosing-vibe-coding-tools) | [Why Your Next Website Should Be AI-Native](/en/why-build-your-own-site)*</content:encoded></item><item><title>平台把 AI 做反了</title><link>https://www.detrouble.com/zh/platforms-got-ai-backwards/</link><guid isPermaLink="true">https://www.detrouble.com/zh/platforms-got-ai-backwards/</guid><description>平台 AI 是自助結帳機——用創新包裝的成本轉嫁。未來是打開門：MCP、A2A 協議，讓你的 AI agent 跟它們的對話。</description><pubDate>Fri, 17 Apr 2026 00:00:00 GMT</pubDate><content:encoded>## 賣鏟子，不是挖金

1848 年加州淘金熱，大部分人搶著挖金。[Samuel Brannan](https://en.wikipedia.org/wiki/Samuel_Brannan) 做了不同的事。他用 $0.20 買下整個地區的淘金盤，以 $15 賣出。九週，$36,000，加州第一個百萬富翁。不靠挖金，靠賦能挖金的人。

AI 淘金熱正在重演同樣的劇本。而大部分平台正在犯礦工的錯誤。

## 所有人急著挖礦

[Notion](https://notion.so) 加了 Notion AI。[Canva](https://canva.com) 建了 Magic Studio——十二個 AI 工具。[Adobe](https://adobe.com) 把 Firefly 嵌入 Photoshop。[Microsoft](https://microsoft.com) 把 Copilot 塞進每個 Office 應用。[Zoom](https://zoom.us) 加了 AI 會議摘要。[Slack](https://slack.com) 加了 AI 頻道整理。

話術永遠一樣：「現在支援 AI。」每個平台急著把 AI 裝進圍牆裡，想讓用戶留下——然後付更多錢。

結果呢？通用的內容。抓不到重點的摘要。跟別人一模一樣的設計。你不能選模型、不能自訂 prompt、不能控制輸出。它給什麼你用什麼。

這就是 AI 版的自助結帳機。

## 自助結帳的問題

![自助服務的陷阱](/images/blog-self-checkout.jpg)

想想超市的自助結帳。餐廳的 QR code 點餐。機場的自助 check-in。這些被包裝成「智慧科技」。它們恰恰相反——**偽裝成創新的成本轉嫁。** 原本由受過訓練的員工做的事，現在推給你。每個令人困惑的介面、每次「請重試」，都是在製造摩擦，不是消除。

平台 AI 功能的邏輯一樣。「來，用我們的 AI 工具」把獲得好結果的工作轉嫁給你。學我們的 prompt、接受我們的限制、付我們的溢價。這就是貼了 AI 標籤的自助結帳機。

我真正要的是相反的。我要*我的* AI——了解我的偏好、風格、標準的——去操作所有這些服務。我不要 Notion 的 AI 幫我寫東西。**我要我的 AI 在 Notion 裡寫東西。**

## 建門，不是築牆

要做到這點，平台不需要 AI 功能。需要**門**——API、協議、介面，讓外部 AI agent 跟它們的系統互動。

![平台 AI（鎖住）vs 開放 API（門）](/images/walled-vs-open.svg)

[Shopify](https://shopify.com) 懂這個道理。它們建了 [MCP servers](https://shopify.dev/docs/agents)、agentic storefronts、通用結帳協議。來自 [ChatGPT](https://chatgpt.com)、[Perplexity](https://perplexity.ai)、[Google AI](https://ai.google) 的 AI agent 可以瀏覽產品、完成購買、跟商家互動——全通過開放協議。沒有專有 AI 功能，只有門。

Shopify 沒有建更好的 AI。它建了更好的通道給*你的* AI。這就是賣鏟子。

而大部分平台還沒意識到：**它們的功能我自己能建。** 用 [vibe coding](/zh/choosing-vibe-coding-tools)，我能建寫作助手、設計生成器、分析 dashboard。程式碼不是護城河。但數據、整合、生態系——值得付費。前提是我能用自己的工具對接。

基礎元件已經在成形。[Anthropic](https://anthropic.com) 的 [MCP](https://modelcontextprotocol.io)（Model Context Protocol）正在成為 AI 的 USB 埠——一個標準介面，讓任何模型跟任何工具互動。Shopify、[Notion](https://notion.so)、[Block](https://block.xyz) 已經在支援。[Google](https://google.com) 的 [A2A](https://google.github.io/A2A/)（Agent-to-Agent）協議解決的是下一層——不同公司的 agent 如何發現彼此、交換能力、協調任務。

這很重要，因為目前大部分 AI 框架在解決錯誤的問題。LangGraph、CrewAI 這些工具幫開發者*組合* agent，這有用。但真正的挑戰不是組合你自己的 agent——是讓不同組織的 agent 互相對話。這需要的是**協議**，不是框架。MCP 和 A2A 會不會像 USB 一樣一統江湖，還是分裂成互相競爭的規格，現在還看不出來。但方向是對的：開放協議優於專有功能。
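
這類協議的核心其實很樸素：一組標準化的訊息。下面是一個極簡的示意，依照 MCP 所採用的 JSON-RPC 2.0 形式；工具名稱 `get_order_status` 與參數純屬假設，並非任何平台的實際 API：

```typescript
// 極簡示意：組出一個 MCP 風格的 JSON-RPC 2.0 工具呼叫訊息。
// 工具名稱與參數純屬假設，並非任何平台的實際 API。
function buildToolCall(id: number, tool: string, args: object): string {
  const request = {
    jsonrpc: "2.0",       // MCP 建立在 JSON-RPC 2.0 之上
    id: id,               // 請求編號，讓 agent 對應回覆
    method: "tools/call", // MCP 規格中呼叫工具的方法名
    params: { name: tool, arguments: args },
  };
  return JSON.stringify(request);
}

const message = buildToolCall(1, "get_order_status", { orderId: "A1024" });
console.log(message);
```

重點不在這幾行程式碼，而在於：只要訊息格式是公開標準，任何 agent 都能組出這個請求，不需要學每個平台的專有 SDK。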

## 管家的未來

![為你服務的專才](/images/blog-service.jpg)

但 API 也不是終點。API 就像一個貨架整齊的自助商店——比鎖起來的倉庫好，但你還是得自己找東西。

真正的終點是管家。

![你 → 你的 Agent → 平台 Agents → 你的結果](/images/agent-to-agent.svg)

我相信未來每個人都會有一個個人 AI agent——了解你的習慣、偏好和標準。你要做什麼，告訴你的 agent。它不需要學每個平台的 API，它跟平台的 agent 對話。

就像酒店管家。你不用研究餐廳菜單、叫計程車、跟劇院議價。你告訴管家你想要什麼。管家知道找誰、怎麼辦。

這就是人類社會的運作方式。我們不會什麼都自己做。找對的人做對的事，溝通意圖。AI 應該一樣——你的通才 agent 跟專才 agent 對話，每個都是各自領域的專家。

要做到這點，平台要停止為用戶建 AI，開始建*關於自己系統的* AI——真正了解每個功能、每個邊界情況、每個整合的專才 agent。我的 agent 跟它的 agent 對話。結果就是我要的。沒有學習曲線，沒有自助結帳。

## 真正的問題

[Steve Jobs](https://en.wikipedia.org/wiki/Steve_Jobs) 說：*「簡單比複雜更難。你必須努力讓思考清晰，才能使它簡單。」* Apple 的產品從不要求用戶學習科技。它們讓科技消融在任務中。

這就是標準。一個簡單的問題能看清一個平台是否達到：**AI 在服務用戶，還是在服務平台？**

如果 AI 只在平台內運作、需要高級訂閱、不能換模型、沒有外部 agent 的 API——它在服務平台。這是一個貼了 AI 標籤的留存工具。

如果平台打開了門，讓你的 AI 按你的方式互動——那是我願意付費的。不是因為它的 AI，是因為它的透明度。

懂的平台會建門。其餘的會繼續挖礦，而別人在賣鏟子。

---

*延伸閱讀：[先有框架，才能跳出框架](/zh/when-ai-coding-makes-things-worse) | [如何選擇 Vibe Coding 工具](/zh/choosing-vibe-coding-tools)*</content:encoded></item><item><title>先有框架，才能跳出框架</title><link>https://www.detrouble.com/zh/when-ai-coding-makes-things-worse/</link><guid isPermaLink="true">https://www.detrouble.com/zh/when-ai-coding-makes-things-worse/</guid><description>一個用 AI 重建的交易系統，看起來對但每一層都錯。缺少的步驟：碰撞——將隱性直覺轉化為共識的深度討論。</description><pubDate>Fri, 17 Apr 2026 00:00:00 GMT</pubDate><content:encoded>## 同一個人、同一套想法、更差的結果

我有一個 swing trading 分析系統——偵測股票收縮型態，通過神經網路評分，輔助交易決策。最初用 [Gemini](https://gemini.google.com) 討論設計、[GitHub Copilot](https://github.com/features/copilot) 實現。

後來專案檔案丟了。用 [Claude Code](https://claude.ai/code) 從零重建。同一個人、同一套交易哲學、同一個偵測邏輯、同一個神經網路架構。唯一的變數是工具。

**新版本在每一層都是根本性的錯誤。**

![拼圖碎片不匹配](/images/blog-puzzle.jpg)

Claude Code 做了我要求的每一件事。我用了 planning mode，叫它研究這個領域，詳細描述了我的哲學。它規劃、建造、測試、告訴我一切沒問題。每一層都是錯的。

不是會崩潰的那種錯。是**看起來對的那種錯**。收縮偵測找到了型態——只是不是我說的那些。標籤分類了交易——只是不是我思考的方式。回測產出了數字——只是不是在真正重要的場景下。

Git 歷史：**4 天 29 次 commit，12 次是重寫或修正。** 我 95% 的時間不是在建造，是在驗證——逐層檢查輸出是否符合我的意圖。

我找到舊程式碼片段，請 Claude Code 比較。結果令人難堪。它逐個功能、逐層走了一遍兩個版本——每一個元件，舊設計要麼更完整，要麼考慮得更周到，要麼處理了新版本根本不知道存在的場景。

## 缺失的碰撞

顯而易見的反應是「你應該多討論」。我做了。Claude Code 有 planning mode，我用了。我也叫它研究這個領域。都沒有解決問題。因為問題不在工具有沒有*規劃*，在於工具認為*規劃是什麼意思*。

![想法在碰撞，不只是在交換](/images/blog-discussion.jpg)

跟 Gemini 討論時，對話是一場碰撞。我提出偵測方法，Gemini 分析。我反駁——「如果收縮中途跳空下跌怎麼辦？小盤股的壓縮方式不同呢？如果板塊正在輪動出場呢？」——同時針對每種情況提出解決方案。Gemini 壓力測試我的方案，找到我沒看見的漏洞。我修正、重新提案、繼續爭論。有時候一個組件就來回好幾個小時，直到雙方都找不到可以打破的地方。

那場碰撞迫使我把連自己都沒意識到的隱性直覺外化出來。每次爭論都把又一片拼圖從我腦中轉移到共享的理解裡。到動手建造時，AI 有了足夠的碎片來拼出*我的*圖。

Claude Code 的規劃不同。它聽、它組織、它確認、它執行。非常擅長把清晰的需求轉成可運作的程式碼。但它不會挑戰需求背後的假設。它信任你說的，圍繞它規劃，然後建造。落差：**理解你說了什麼，和理解你是什麼意思。**

建一個登入頁面，你說的和你意思的是同一件事。建一個交易系統——「正確」意味著反映 15 年對市場行為的直覺——那個落差就是一切。

## 信心陷阱

還有一個更深的問題。工具會驗證自己的成果。它測試、分析輸出、告訴你是正確的。回測出 65% 勝率，它說「結果看起來合理」。

那份信心讓你付出代價。你信任輸出、繼續推進、投入時間——直到你手動逐層驗證時才發現問題。用舊的流程，驗證幾乎不需要。設計碰撞夠徹底，實現自然就反映了我的意圖。我不需要檢查，因為思考已經完成了。

我不在意交易系統最終是否賺錢——那取決於市場和執行。我在意的是**保真度**：輸出是否符合我的想像？一個完美建造了錯誤東西的工具，比一個粗略建造了正確東西的工具更糟。

![先實現 vs 先思考——真正重要的工作流程](/images/think-first-build-later.svg)

## 創造的浪漫

人們愛說「think outside the box」。但他們忘了：**你首先需要有 box。**

這個 box 是你的理解、你的領域知識、你多年觀察的累積。沒有 box，就沒有 outside——只有隨機。某些隨機是好的——意想不到的連結和意外的驚喜是跟 AI 合作的禮物。但當所有東西都是隨機的，當輸出跟你的理解毫無錨點，它就不再是創造。它是恰好能編譯的噪音。

![建造之前的思考](/images/blog-blueprint.jpg)

[尼古拉·特斯拉](https://en.wikipedia.org/wiki/My_Inventions:_The_Autobiography_of_Nikola_Tesla)理解這一點。他在 1919 年寫道：

&gt; *「當我有了一個想法，我會立即在想像中建造它。我修改結構、做出改進、在腦中操作這個裝置。對我來說，在思想中運轉渦輪機還是在工坊中測試，完全沒有區別。」*

他在腦中建造、測試、完善——只有當心中的藍圖完美無缺時，才會製造實體的機器。二十年，從未有過例外。

這正是 AI coding 工具跳過的部分。它們直接跳到機器。機器能跑——但那不是*你的*機器。那是用 AI 手上的碎片拼的，不是用你腦中的碎片。

藍圖不是你給 AI 的描述，而是那個完整的心智模型——經過測試、壓力測試、通過碰撞而精煉——在你描述任何東西之前就已存在於你腦中。建造這個模型，需要的是深度思考那種緩慢、混亂、不舒服的工作，沒有任何 planning mode 能替代。

創造的浪漫不在於看機器運轉。在於那段讓機器成為必然的思想之旅。

有時候，用 AI coding 工具最有生產力的事，是拒絕寫程式碼——先踏上那段旅程。

---

*延伸閱讀：[如何選擇 Vibe Coding 工具](/zh/choosing-vibe-coding-tools) | [為什麼你的下一個網站應該是 AI 原生的](/zh/why-build-your-own-site)*</content:encoded></item><item><title>Why Your Next Website Should Be AI-Native</title><link>https://www.detrouble.com/en/why-build-your-own-site/</link><guid isPermaLink="true">https://www.detrouble.com/en/why-build-your-own-site/</guid><description>Platforms are black boxes AI can&apos;t see into. An AI-native site is transparent — content, design, and features are all readable and modifiable by AI. That transparency compounds over time.</description><pubDate>Sun, 05 Apr 2026 00:00:00 GMT</pubDate><content:encoded>## The Wrong Question

Most people choosing between platforms and custom sites ask: *which is easier to set up?*

That was the right question five years ago. It&apos;s the wrong question now.

The right question in 2026 is: **how well does your site work with AI?**

Not &quot;does it have an AI feature.&quot; But: can AI read your entire site, understand its structure, modify its design, add new features, create content from your actual work, and deploy changes — all through conversation?

If your site is on [Squarespace](https://squarespace.com) or [WordPress](https://wordpress.com), the answer is no. Your content is behind a proprietary editor. Your design is locked in a template. AI can help you *write* a blog post, but it can&apos;t publish it, style it, or build a payment system around it. You&apos;re still the middleman, copying and pasting between tools.

## What AI-Native Actually Means

This site — [deTrouble](https://www.detrouble.com) — is what I&apos;d call AI-native. Not because it uses AI features. Because **every layer is transparent to AI**.

![Platform vs AI-Native: the fundamental difference](/images/platform-vs-ai-native.svg)

The entire site is plain files. Content is Markdown. Design is CSS. Configuration is one file. Deployment is `git push`. No database. No admin panel. No proprietary format. [Claude Code](https://claude.ai/code) — or any AI coding tool — can read and modify all of it.

That&apos;s not a technical detail. It&apos;s the entire point.
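
To make &quot;transparent to AI&quot; concrete, here is a minimal sketch of reading one post the way any tool, AI or human, can. The front-matter field names are hypothetical, not this site&apos;s actual schema:

```typescript
// Minimal sketch: a post is a plain Markdown file with key: value front matter.
// The field names below are hypothetical, not this site's actual schema.
function parsePost(file: string) {
  // front matter sits between the first two "---" lines
  const parts = file.split("---\n");
  const meta: { [key: string]: string } = {};
  for (const line of parts[1].trim().split("\n")) {
    const idx = line.indexOf(":");
    meta[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return { meta: meta, body: parts[2].trim() };
}

const post = parsePost("---\ntitle: Hello\nlang: en\n---\nBody text.");
console.log(post.meta.title); // "Hello"
```

No export step, no API client: the same few lines of string handling give an AI coding tool full read and write access to every post.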

## Content That Writes Itself (Almost)

Here&apos;s the workflow that made me stop considering platforms entirely.

I do R&amp;D and development using Claude Code. During that process, it naturally accumulates context: project code, planning documents, conversation history, decisions made along the way. When it&apos;s time to write about what I built, I don&apos;t start from scratch. I say:

&gt; *&quot;Analyse this project, our conversation, and the decisions we made. Write an article about it.&quot;*

Claude Code already has the context. It produces a draft — structured, fact-checked, in both English and Chinese. I review, give feedback, iterate. The article you&apos;re reading was created exactly this way.

![Content creation workflow: R&amp;D → &quot;Write about it&quot; → Review → git push → Live](/images/content-workflow.svg)

Then: `git push`. Live on the site. No CMS login. No manual translation. **The content creation pipeline is the same as the development pipeline.** That&apos;s only possible because AI can see everything.

On a platform, you ask ChatGPT to write something, then manually paste it into the editor, manually format it, manually translate it, and manually hit publish. Every step is a disconnection. Every disconnection is friction.

## Features Through Conversation

Content is just the beginning. When you need new capabilities on a platform, you search for a plugin — maybe it exists, maybe it costs $20/month, maybe it almost does what you want.

On an AI-native site, you have a conversation. &quot;Add [Stripe](https://stripe.com) checkout&quot; becomes a [Cloudflare Worker](https://workers.cloudflare.com) handling webhooks. &quot;Add analytics&quot; becomes one component. &quot;Add a login system across all my subdomains&quot; becomes an auth Worker with [D1](https://developers.cloudflare.com/d1/) and shared cookies.

**The pattern is always the same:** describe what you need → AI implements it → `git push` → live. No marketplace. No plugin compatibility issues. The constraint isn&apos;t what the platform allows. The constraint is what you can describe.
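
As an illustration, here is the skeleton of the kind of Worker an &quot;Add Stripe checkout&quot; conversation produces. The route path is hypothetical, and a real handler must also verify the `Stripe-Signature` header before trusting any payload:

```typescript
// Skeleton sketch only: the route path is hypothetical, and a real handler
// must verify the Stripe-Signature header before trusting the payload.
function routeStatus(method: string, path: string): number {
  if (method === "POST") {
    if (path === "/webhooks/stripe") {
      return 200; // acknowledge the event so Stripe stops retrying
    }
  }
  return 404;
}

// In a real Worker this object is the module's default export, and
// fetch(request) returns a Response; sketched here as plain data.
const worker = {
  async fetch(request: { method: string; url: string }) {
    const path = new URL(request.url).pathname;
    const status = routeStatus(request.method, path);
    return { status: status, body: status === 200 ? "ok" : "not found" };
  },
};
```

The point is the shape: one small, readable file, deployed with the same `git push` as everything else.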

## The Compounding Advantage

![Freedom from platform constraints](/images/blog-freedom.jpg)

Here&apos;s what most comparisons miss: this advantage **compounds over time**.

Month 1, your custom site and a Squarespace site look similar. But by month 6, your site has evolved through dozens of conversations — new features, refined design, content generated from your actual work. The Squarespace site looks the same, or you&apos;ve spent hours fighting the template to make small changes.

Every conversation with Claude Code adds to its understanding of your project. The [memory system](https://claude.ai/code) retains your preferences, your architecture decisions, your content style. The next change is faster than the last.

A platform doesn&apos;t know your project. It doesn&apos;t learn from your decisions. It&apos;s the same template engine on day 300 as it was on day 1.

## When This Doesn&apos;t Apply

Honesty matters. This approach isn&apos;t for everyone.

It&apos;s best for people who can clearly describe what they want, who create content regularly, and who want a site that evolves. You don&apos;t need to code — but you need to think clearly about what you&apos;re building.

It&apos;s not ideal if you need [Shopify](https://shopify.com)-level e-commerce today, if your team needs a visual CMS for non-technical editors, or if you want zero involvement with the technical layer. For those cases, platforms are still the right call.

## The Point

The real shift isn&apos;t &quot;build vs. buy.&quot; It&apos;s **transparency**.

A platform is a black box. You put content in, a website comes out. You can&apos;t see inside. AI can&apos;t see inside.

An AI-native site is transparent all the way down. Content is files. Design is code. Features are functions. Everything is readable, modifiable, and extensible — by you or by AI.

The question isn&apos;t whether you *can* build your own site. With vibe coding, anyone who can describe what they want can build one. The question is whether you want a site that grows with you, or one that stays exactly where the platform left it.

---

*Read next: [How to Choose a Vibe Coding Tool](/en/choosing-vibe-coding-tools) | [You Need the Box Before You Can Think Outside It](/en/when-ai-coding-makes-things-worse)*</content:encoded></item><item><title>為什麼你的下一個網站應該是 AI 原生的</title><link>https://www.detrouble.com/zh/why-build-your-own-site/</link><guid isPermaLink="true">https://www.detrouble.com/zh/why-build-your-own-site/</guid><description>平台是黑盒子，AI 看不進去。AI 原生網站是透明的——內容、設計、功能都能被 AI 讀取和修改。這種透明性會隨時間複利成長。</description><pubDate>Sun, 05 Apr 2026 00:00:00 GMT</pubDate><content:encoded>## 問錯了問題

大部分人在選擇平台還是自建時會問：*哪個更容易搭建？*

五年前這是對的問題。現在不是了。

2026 年的正確問題是：**你的網站跟 AI 的協作能力有多強？**

不是「它有沒有 AI 功能」。而是：AI 能不能讀取你整個網站、理解它的結構、修改它的設計、加入新功能、從你的實際工作中創建內容、然後部署變更——全部通過對話？

如果你的網站在 [Squarespace](https://squarespace.com) 或 [WordPress](https://wordpress.com) 上，答案是不能。你的內容在專有編輯器裡。你的設計被鎖在模板裡。AI 可以幫你*寫*一篇文章，但它不能幫你發布、排版、或在上面建一個支付系統。你還是那個中間人，在工具之間複製貼上。

## AI 原生到底是什麼意思

這個網站——[deTrouble](https://www.detrouble.com)——是我所說的 AI 原生。不是因為它用了 AI 功能，而是因為**每一層都對 AI 透明**。

![平台 vs AI 原生：根本差異](/images/platform-vs-ai-native.svg)

整個網站都是純檔案。內容是 Markdown。設計是 CSS。設定是一個檔案。部署是 `git push`。沒有資料庫，沒有管理後台，沒有專有格式。[Claude Code](https://claude.ai/code)——或任何 AI coding 工具——都能讀取和修改所有這些。

這不是技術細節，這是**全部的重點**。

## 內容（幾乎）自己寫出來

以下是讓我徹底放棄考慮平台的工作流程。

我用 Claude Code 做研發和開發工作。在這個過程中，它自然累積了上下文：專案程式碼、規劃文件、對話歷史、沿途做出的決策。當需要寫文章時，我不用從零開始。我說：

&gt; *「分析這個專案、我們的對話和我們做的決策，寫一篇文章。」*

Claude Code 已經有上下文了。它產出一篇草稿——有結構、經過事實查核、中英雙語。我審閱、給回饋、迭代。你正在讀的這篇文章就是這樣創建的。

![內容創作流程：研發 → 「寫一篇文章」→ 審閱迭代 → git push → 上線](/images/content-workflow.svg)

然後：`git push`。上線。不用登入 CMS。不用手動翻譯。**內容創作流程就是開發流程。** 這只有在 AI 能看到一切的情況下才可能。

在平台上，你最多只能叫 ChatGPT 寫點東西，然後手動貼到編輯器裡、手動排版、手動翻譯、手動按發布。每一步都是斷裂。每個斷裂都是摩擦。

## 通過對話加功能

內容只是開始。在平台上需要新功能，你搜尋 plugin——也許有，也許每月 $20，也許幾乎能做到你想要的，但總差一點。

在 AI 原生的網站上，你進行一次對話。「加 [Stripe](https://stripe.com) 結帳」變成一個 [Cloudflare Worker](https://workers.cloudflare.com) 處理 webhook。「加分析功能」變成一個元件。「加一個跨子域名的登入系統」變成一個 auth Worker 搭配 [D1](https://developers.cloudflare.com/d1/) 和共享 cookie。

**每次的模式都一樣：** 描述需求 → AI 實現 → `git push` → 上線。沒有市集，沒有 plugin 相容性問題。限制不是平台允許什麼，限制是你能描述什麼。
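
舉個具體例子：「加分析功能」在 AI 原生網站上，可能就是下面這樣一小段程式碼。端點路徑 `/api/pageview` 純屬假設，只是示意：

```typescript
// 極簡示意：組出一筆 pageview 記錄。端點路徑純屬假設。
function pageviewPayload(path: string, referrer: string): string {
  return JSON.stringify({
    path: path,
    referrer: referrer,
    ts: Date.now(), // 記錄時間戳
  });
}

// 瀏覽器端可以用 sendBeacon 送出，不阻塞頁面卸載：
// navigator.sendBeacon("/api/pageview", pageviewPayload(location.pathname, document.referrer));
```

沒有 plugin、沒有月費。整段程式碼可讀、可改，你和 AI 都看得懂。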

## 複利效應

![擺脫平台的束縛](/images/blog-freedom.jpg)

大部分比較都忽略了這點：這個優勢**隨時間複利增長**。

第一個月，你的自建站和 Squarespace 看起來差不多。但到了第六個月，你的網站經歷了數十次對話的進化——新功能、改進的設計、從你實際工作中生成的內容。Squarespace 網站看起來跟第一個月一樣，或者你花了好幾個小時跟模板搏鬥只為做小改動。

每次跟 Claude Code 的對話都加深了它對你專案的理解。[記憶系統](https://claude.ai/code)保留了你的偏好、架構決策、內容風格。下一次變更比上一次更快。

平台不認識你的專案。它不從你的決策中學習。第 300 天的模板引擎跟第 1 天一模一樣。

## 不適用的場景

誠實很重要。這個方式不適合所有人。

它最適合能清楚描述需求的人、定期創作內容的人、和想要網站持續進化的人。你不需要會寫程式——但需要清晰地思考你在建什麼。

如果你今天就需要 [Shopify](https://shopify.com) 級別的電商、團隊有非技術編輯需要視覺化 CMS、或你完全不想接觸技術層——平台仍然是對的選擇。

## 重點

真正的轉變不是「自建 vs. 購買」。是**透明度**。

平台是黑盒子。你把內容放進去，網站出來。你看不到裡面。AI 也看不到。

AI 原生的網站從上到下都是透明的。內容是檔案。設計是程式碼。功能是函數。所有東西都可讀、可改、可擴展——無論是你還是 AI。

問題不是你*能不能*自己建網站。有了 vibe coding，任何能描述需求的人都能建。問題是你想要一個跟你一起成長的網站，還是一個停留在平台替你設定好的地方。

---

*延伸閱讀：[如何選擇 Vibe Coding 工具](/zh/choosing-vibe-coding-tools) | [先有框架，才能跳出框架](/zh/when-ai-coding-makes-things-worse)*</content:encoded></item><item><title>How to Choose a Vibe Coding Tool (Without Losing Your Mind)</title><link>https://www.detrouble.com/en/choosing-vibe-coding-tools/</link><guid isPermaLink="true">https://www.detrouble.com/en/choosing-vibe-coding-tools/</guid><description>Most AI coding tool comparisons focus on features. The distinction that actually matters is framework-model coupling — and whether the tool makes you more frustrated or less.</description><pubDate>Sat, 04 Apr 2026 00:00:00 GMT</pubDate><content:encoded>## The Noise

[Andrej Karpathy](https://x.com/karpathy) coined &quot;vibe coding&quot; in February 2025 — the idea of guiding AI through conversation rather than writing every line yourself. The concept resonated. The market responded. And now there are more AI coding tools than anyone can reasonably evaluate.

That&apos;s not the real problem.

![Navigating the noise of AI coding tools](/images/blog-coding.jpg)

The real problem is that most of what you&apos;ll find online — the tutorials, the demos, the &quot;build X in 5 minutes&quot; videos — isn&apos;t made by people doing real work. They&apos;re building landing pages, to-do apps, and marketing copy generators. The same shallow examples, recycled endlessly.

You watch. You try it. You think, *&quot;okay, that&apos;s neat.&quot;* And then nothing. No clarity on how it applies to your actual work. No understanding of why the same prompt produces great results in one tool and garbage in another.

That gap — between the demo and reality — is where most people get stuck.

## The One Distinction That Matters

Most discussions compare tools by features: autocomplete, chat, agent mode, model selection. That&apos;s surface-level.

The distinction that actually predicts your experience is this: **how tightly is the framework coupled to the model?**

Every AI coding tool has two layers:

- **The framework** — the interface you interact with. It handles file access, project context, planning, command execution, and workflow.
- **The model** — the AI that reasons, writes code, and makes decisions.

Some tools lock these together. Others let you mix and match. The choice between these two architectures shapes everything.

![Framework vs Model: two layers of every AI coding tool](/images/framework-vs-model.svg)

| Approach | Example | Trade-off |
|---|---|---|
| **Tightly coupled** | [Claude Code](https://claude.ai/code) (Claude only) | Less choice, deeper optimisation |
| **Loosely coupled** | [OpenCode](https://opencode.ai) (75+ models) | More choice, shallower integration |
| **Middle ground** | [GitHub Copilot](https://github.com/features/copilot) (GPT, Claude, Gemini) | Curated selection, official partnerships |

When the framework and model are built together, every interaction is optimised — the system prompts, the planning steps, the error recovery, the way context is gathered and fed back. The framework knows how the model thinks.

When they&apos;re separate, the framework sends generic API calls and hopes for the best. It works. But there&apos;s a ceiling.

Here&apos;s a concrete example. [Claude Code](https://claude.ai/code), before writing any code, enters a planning phase: it reads your project structure, traces import chains, examines existing test patterns, and maps dependencies. Only then does it start making changes. This workflow exists because the framework was designed around how [Claude](https://claude.ai) reasons — it knows Claude performs better with upfront context, so it gathers that context automatically.

A generic framework using the same model through an API won&apos;t do this. It doesn&apos;t know Claude&apos;s preferences. It just sends the prompt.
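
The gap is easiest to see in what actually reaches the model. Here is a rough sketch; the message shape follows the general pattern of chat-style model APIs, not any vendor&apos;s exact schema:

```typescript
// Sketch of the integration gap. The message shape is generic,
// not any vendor's exact schema.
function genericRequest(prompt: string) {
  // A loosely coupled framework forwards the prompt as-is.
  return { messages: [{ role: "user", content: prompt }] };
}

function contextAwareRequest(prompt: string, projectContext: string[]) {
  // A tightly coupled framework gathers structure, imports, and test
  // patterns first, because it knows the model performs better with them.
  const briefing = "Project context:\n" + projectContext.join("\n");
  return {
    messages: [
      { role: "user", content: briefing },
      { role: "user", content: prompt },
    ],
  };
}
```

Same model, same prompt. The tightly coupled framework simply never sends the bare version.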

## The Frustration Test

![The real test happens during actual work](/images/blog-testing.jpg)

This is my actual method for choosing tools. It&apos;s simple, and it&apos;s more reliable than any benchmark.

**Use the tool on real work for a few hours. Pay attention to how you feel.**

Not whether the output is &quot;impressive.&quot; Not whether it handles a contrived demo well. How you *feel* while working.

&gt; If you find yourself getting frustrated — the tool misunderstood your intent, went off in a wrong direction, produced something you&apos;d never write, or required three follow-up prompts to correct — that&apos;s the signal.

The tool can&apos;t bridge the gap between your thinking and its output.

A good AI coding tool should feel like working with a capable colleague. You describe the goal. It figures out the approach. You review and adjust. The cycle should feel *natural*, not like wrestling.

This maps directly to the deTrouble principle: **technology should reduce friction, not create it.** If a tool adds cognitive overhead — if you&apos;re spending more energy managing the tool than doing the work — it&apos;s failed its basic purpose, regardless of what the benchmarks say.

The frustration test also reveals integration depth. Tightly integrated tools frustrate you less because the framework anticipates the model&apos;s behaviour. Loosely coupled tools are more likely to surprise you — and not in a good way.

## A Word on Benchmarks and Distillation

Speaking of benchmarks: be careful.

Some third-party models claim compatibility with tools like Claude Code by offering &quot;[Anthropic](https://anthropic.com)-compatible APIs.&quot; Behind the scenes, many of these models were trained through **distillation** — feeding massive volumes of Claude&apos;s outputs into their own training process to mimic its behaviour.

In February 2025, Anthropic [disclosed](https://www.cnbc.com/2025/02/03/anthropic-accuses-chinese-ai-companies-of-distilling-its-claude-model.html) that several providers had created over 24,000 fake accounts and generated more than 16 million conversations with Claude for exactly this purpose.

A distilled model might score well on standardised benchmarks. But benchmark performance and real-world reliability are different things. A model that has memorised patterns without understanding them will fail in novel situations — the exact situations where you need your tool most.

**Benchmarks tell you what a model can do in controlled conditions. The frustration test tells you what it does in yours.**

## What I Use

Being transparent about my choices and their trade-offs:

### Primary: Claude Code

The framework and model are built by the same team ([Anthropic](https://anthropic.com)). The integration is the deepest available. It plans before it acts, verifies its own output, and connects to external tools through [MCP](https://modelcontextprotocol.io).

It&apos;s not perfect:
- **Terminal-native** — if you&apos;ve never worked in a CLI, there&apos;s a learning curve
- **Cost** — requires a [subscription](https://claude.ai/pricing) or API usage fees
- **Regional availability** — not accessible everywhere
- **Claude-only** — you&apos;re committed to one model provider

But for me, the trade-off is clear. The output is consistently closer to what I intended, with less back-and-forth, than anything else I&apos;ve tried.

### Alternative: VS Code + GitHub Copilot (with Claude)

When Claude Code isn&apos;t an option, this is the setup I&apos;d recommend.

[VS Code](https://code.visualstudio.com) is the most widely used editor for good reason — 50,000+ extensions, monthly updates, stable, free. [GitHub Copilot](https://github.com/features/copilot)&apos;s agent mode (since VS Code 1.97) handles multi-file editing, terminal execution, and autonomous planning. And because [Microsoft](https://microsoft.com) and Anthropic have a formal partnership, Claude is available as an official model option in Copilot — the integration is maintained, not hacked together.

You can also run Claude Code&apos;s [VS Code extension](https://marketplace.visualstudio.com/items?itemName=anthropic.claude-code) alongside Copilot, giving you both workflows in one editor.

At $10/month for Copilot with Claude access, it&apos;s practical and well-supported.

## The Actual Point

Every few months, a new tool appears and the cycle restarts. New benchmarks. New demos. New hype. And people chase the latest thing without asking the question that actually matters:

*Does this tool help me think, or does it make me think about the tool?*

The best technology disappears into your workflow. You stop noticing it&apos;s there. It just works — like breathing air.

That&apos;s not a feature you&apos;ll find on any comparison chart. But it&apos;s the only one that matters.

---

*Read next: [Why Your Next Website Should Be AI-Native](/en/why-build-your-own-site) | [You Need the Box Before You Can Think Outside It](/en/when-ai-coding-makes-things-worse)*</content:encoded></item><item><title>Hello, deTrouble</title><link>https://www.detrouble.com/en/hello-detrouble/</link><guid isPermaLink="true">https://www.detrouble.com/en/hello-detrouble/</guid><description>The story behind deTrouble: a platform built on three principles — simplicity, utility, feasibility. Why technology should eliminate friction, not create it.</description><pubDate>Sat, 04 Apr 2026 00:00:00 GMT</pubDate><content:encoded>## The Problem

We live in an era of infinite tools. Every week, a new framework. Every month, a new platform. Every year, a new paradigm that promises to change everything.

And yet — are we actually getting more done? Or just getting more busy?

The real trouble isn&apos;t a lack of technology. It&apos;s the noise that technology creates when it&apos;s built without purpose.

![A calm workspace — clarity over clutter](/images/blog-workspace.jpg)

## The Thinking

The Chinese word 滌 (dí) means to cleanse, to wash away. 擾 (rǎo) means disturbance, trouble. Together, 滌擾 means to wash away trouble.

That&apos;s our philosophy: **technology should eliminate friction, not create it.**

Three principles guide everything here:

- **Simplicity** — If it needs a manual, it&apos;s not simple enough.
- **Utility** — Does it solve a real problem, or just a theoretical one?
- **Feasibility** — Can you actually ship this, today, with what you have?

## The Solution

deTrouble is a space for sharing ideas through the lens of problem-solving:

1. **Identify a pain point** — What&apos;s actually bothering people?
2. **Show the thinking** — How do you reason through the solution?
3. **Demonstrate the impact** — What changes when the trouble is gone?

Every article follows this structure. No fluff. No hype. Just problems met with thoughtful solutions.

## The Impact

Real technology should be like breathing air — you don&apos;t notice it&apos;s there, but it&apos;s quietly making everything possible.

We don&apos;t build tech for the sake of tech. We build to delete trouble.

Welcome to deTrouble.

---

*Read next: [How to Choose a Vibe Coding Tool](/en/choosing-vibe-coding-tools) | [Why Your Next Website Should Be AI-Native](/en/why-build-your-own-site) | [You Need the Box Before You Can Think Outside It](/en/when-ai-coding-makes-things-worse)*</content:encoded></item><item><title>如何選擇 Vibe Coding 工具（不讓自己抓狂）</title><link>https://www.detrouble.com/zh/choosing-vibe-coding-tools/</link><guid isPermaLink="true">https://www.detrouble.com/zh/choosing-vibe-coding-tools/</guid><description>多數 AI coding 工具比較只看功能。真正決定體驗的是框架與模型的耦合度——以及這個工具讓你更煩躁還是更順手。</description><pubDate>Sat, 04 Apr 2026 00:00:00 GMT</pubDate><content:encoded>## 噪音

[Andrej Karpathy](https://x.com/karpathy) 在 2025 年 2 月提出了「vibe coding」——用對話引導 AI 寫程式，而不是自己逐行撰寫。這個概念引起了共鳴，市場隨之爆發。現在的 AI coding 工具多到沒有人能合理地逐一評估。

但這不是真正的問題。

![在 AI coding 工具的噪音中找方向](/images/blog-coding.jpg)

真正的問題是，你在網上找到的大部分內容——教學、示範、「5 分鐘搭建 X」的影片——都不是真正在做開發的人做的。他們在建登陸頁面、待辦事項 app、行銷文案生成器。一樣的淺層示範，反覆循環。

你看了，試了，心想*「嗯，還行」*。然後就沒有了。不清楚這些跟你的實際工作有什麼關係。不理解為什麼同一個 prompt 在某個工具裡表現很好，換一個就變成垃圾。

這個落差——示範和現實之間——就是大部分人卡住的地方。

## 唯一重要的區別

大部分的工具比較都在看功能：自動補全、對話、agent 模式、模型選擇。這些都是表面的。

真正能預測你使用體驗的區別是：**框架和模型的耦合程度有多高？**

每個 AI coding 工具都有兩層：

- **框架**——你互動的介面。它負責檔案存取、專案上下文、規劃、指令執行和工作流程。
- **模型**——負責推理、寫程式和做決策的 AI。

有些工具把這兩層綁在一起，有些讓你自由搭配。這兩種架構之間的選擇，決定了一切。

![框架與模型：每個 AI coding 工具的兩層架構](/images/framework-vs-model.svg)

| 方式 | 例子 | 取捨 |
|---|---|---|
| **緊密耦合** | [Claude Code](https://claude.ai/code)（僅限 Claude） | 選擇少，最佳化深 |
| **鬆散耦合** | [OpenCode](https://opencode.ai)（75+ 模型） | 選擇多，整合淺 |
| **中間路線** | [GitHub Copilot](https://github.com/features/copilot)（GPT、Claude、Gemini） | 精選模型，官方合作 |

當框架和模型一起打造時，每個互動都經過最佳化——系統提示、規劃步驟、錯誤修復、上下文的收集和回饋方式。框架知道模型如何思考。

分開的時候，框架發送通用 API 呼叫，然後祈禱結果好。能用，但有上限。

舉個具體的例子。[Claude Code](https://claude.ai/code) 在寫任何程式之前，會進入規劃階段：讀取你的專案結構、追蹤 import 鏈、檢查現有測試模式、映射依賴關係。然後才開始修改。這個工作流程存在的原因是框架圍繞 [Claude](https://claude.ai) 的推理方式而設計——它知道 Claude 在有充足上下文時表現更好，所以自動收集這些上下文。

一個通用框架透過 API 使用同一個模型，不會做這些事。它不了解 Claude 的偏好，只是發送 prompt。

## 煩躁測試

![真正的測試發生在實際工作中](/images/blog-testing.jpg)

這是我選擇工具的實際方法。很簡單，但比任何跑分都可靠。

**用這個工具做幾個小時的真實工作。注意你的感受。**

不是看產出「厲不厲害」，不是看它處理精心設計的示範好不好。而是你在工作時的**感受**。

&gt; 如果你開始覺得煩躁——工具沒聽懂你的意圖、走偏了方向、寫出你絕不會寫的東西、或者需要三次追問才能修正——這就是信號。

工具無法銜接你的思考和它的產出。

一個好的 AI coding 工具應該像跟一個能力出色的同事合作。你描述目標，它想出方法。你檢視和調整。這個循環應該感覺**自然**，而不是在搏鬥。

這直接對應 deTrouble 的原則：**科技應該減少摩擦，而非製造摩擦。** 如果一個工具增加了認知負擔——你花更多精力在管理工具而不是做事——那它就失敗了，不管跑分多好看。

煩躁測試同時也揭示了整合深度。緊密整合的工具讓你更少煩躁，因為框架能預判模型的行為。鬆散耦合的工具更容易讓你措手不及——而且不是好的那種驚喜。

## 關於跑分和蒸餾的提醒

說到跑分：小心。

有些第三方模型宣稱相容 Claude Code 之類的工具，提供「[Anthropic](https://anthropic.com) 相容 API」。背後很多是通過**蒸餾**——大量餵入 Claude 的輸出到自己的訓練過程，模仿其行為。

2025 年 2 月，Anthropic [公開揭露](https://www.cnbc.com/2025/02/03/anthropic-accuses-chinese-ai-companies-of-distilling-its-claude-model.html)數家供應商建立了超過 24,000 個假帳號，與 Claude 進行了超過 1,600 萬次對話，就是為了這個目的。

蒸餾出來的模型可能在標準化跑分上表現不錯。但跑分表現和真實使用中的可靠性是兩回事。一個記住了模式但沒有理解的模型，會在新場景中失敗——而那些正是你最需要工具幫忙的時候。

**跑分告訴你模型在受控條件下能做什麼。煩躁測試告訴你它在你的條件下做得如何。**

## 我的選擇

誠實地說明我的選擇和對應的取捨：

### 主力：Claude Code

框架和模型由同一個團隊（[Anthropic](https://anthropic.com)）打造。整合深度是目前最高的。它在行動前規劃、驗證自己的產出、通過 [MCP](https://modelcontextprotocol.io) 連接外部工具。

不完美的地方：
- **終端原生**——如果你沒用過 CLI，有學習門檻
- **費用**——需要[訂閱](https://claude.ai/pricing)或 API 使用費
- **地區限制**——不是所有地方都能用
- **僅限 Claude**——你綁定了一個模型供應商

但對我來說，取捨是明確的。產出始終更接近我的意圖，來回修改更少，比我試過的任何其他工具都好。

### 替代方案：VS Code + GitHub Copilot（搭配 Claude）

當 Claude Code 不可用時，這是我推薦的組合。

[VS Code](https://code.visualstudio.com) 是使用最廣泛的編輯器，理由很充分——50,000+ 擴充套件、每月更新、穩定、免費。[GitHub Copilot](https://github.com/features/copilot) 的 agent 模式（自 VS Code 1.97 起）支援多檔案編輯、終端執行和自主規劃。因為 [Microsoft](https://microsoft.com) 和 Anthropic 有正式合作關係，Claude 作為官方模型選項在 Copilot 中可用——整合是經過維護的，不是拼湊出來的。

你也可以在 VS Code 裡同時跑 Claude Code 的[擴充套件](https://marketplace.visualstudio.com/items?itemName=anthropic.claude-code)，在一個編輯器中擁有兩種工作流程。

Copilot 每月 $10 就包含 Claude 模型存取，務實且有良好支援。

## 真正的重點

每隔幾個月，新工具出現，循環重新開始。新跑分、新示範、新炒作。人們追逐最新的東西，卻沒問過真正重要的問題：

*這個工具是幫助我思考，還是讓我花時間去想工具本身？*

最好的科技會消融在你的工作流程裡。你不再注意到它的存在。它就是能用——像呼吸空氣一樣。

這不是任何對比表格上會列出的功能。但它是唯一真正重要的。

---

*延伸閱讀：[為什麼你的下一個網站應該是 AI 原生的](/zh/why-build-your-own-site) | [先有框架，才能跳出框架](/zh/when-ai-coding-makes-things-worse)*</content:encoded></item><item><title>你好，滌擾</title><link>https://www.detrouble.com/zh/hello-detrouble/</link><guid isPermaLink="true">https://www.detrouble.com/zh/hello-detrouble/</guid><description>deTrouble 的起源故事：建立在簡約、實用、可行三個原則上的平台。為什麼科技應該消除摩擦，而不是製造摩擦。</description><pubDate>Sat, 04 Apr 2026 00:00:00 GMT</pubDate><content:encoded>## 痛點

我們活在一個工具無限的時代。每週一個新框架，每月一個新平台，每年一個號稱改變一切的新範式。

然而——我們真的做到更多了嗎？還是只是變得更忙了？

真正的問題不是缺乏科技，而是科技在沒有目的的情況下被建造時所製造的噪音。

![清晰勝於混亂](/images/blog-workspace.jpg)

## 思考過程

滌，是洗去、清除的意思。擾，是煩擾、干擾的意思。滌擾，就是洗去煩擾。

這就是我們的哲學：**科技應該消除摩擦，而不是製造摩擦。**

三個核心原則：

- **簡約（Simplicity）**— 如果需要說明書，那就不夠簡單。
- **實用（Utility）**— 它解決的是真實的問題，還是理論上的問題？
- **可行（Feasibility）**— 用你手上的資源，今天就能交付嗎？

## 解決方案

deTrouble 是一個透過問題解決視角來分享想法的空間：

1. **指出痛點** — 真正困擾人的是什麼？
2. **展示思考** — 如何推理出解決方案？
3. **呈現影響** — 當問題消失後，什麼改變了？

每篇文章都遵循這個結構。沒有廢話，沒有炒作，只有問題與深思熟慮的解決方案。

## 影響

真正的科技，應該是呼吸空氣般存在——你察覺不到它的存在，但它卻默默地讓一切成為可能。

我們不為科技而科技，我們為消除煩擾而建造。

歡迎來到滌擾。

---

*延伸閱讀：[如何選擇 Vibe Coding 工具](/zh/choosing-vibe-coding-tools) | [為什麼你的下一個網站應該是 AI 原生的](/zh/why-build-your-own-site) | [先有框架，才能跳出框架](/zh/when-ai-coding-makes-things-worse)*</content:encoded></item></channel></rss>