10-09 AI News Daily
19 Days to Launch a Website, Plugin, and Desktop App Suite: An Unconventional AI Coding Practice Report
So, remember my AI Coding invitation? Well, after all these days, the project’s finally live, and my head’s clear! Forget the sentimentality and the personal journey for today. Let’s get hardcore and talk frankly about how I used AI as my ‘pair programming’ buddy to turn the PromptHub project from just an idea into a stack of runnable code in just 19 days. ✨
This report? Yeah, it’s probably not what you’re expecting from those ‘Vibe Coding’ videos. There’s zero magic here—just pure engineering, tough trade-offs, and a whole lot of ‘aha!’ moments after tripping over every single pitfall imaginable. 🚧
Architecture Selection: AI as the “Scaffolding,” Me as the “Decision-Maker”
PromptHub? From the jump, I hit it with a pretty wild goal: build for Web, Chrome extension, and Electron desktop—all at the same damn time! Backend? Next.js API Routes. Database? SQLite to start, but ready to swap to a production setup whenever. 🚀
This tech stack, honestly, used to be a nightmare for me. Back in the day, just getting package.json balanced and tsconfig.json squared away for all those different environments? That alone would’ve sent me spiraling. 🫠
My approach? Simple: I treat AI like a super-advanced scaffolding generator. I’m not asking it ‘what tech should I use?’ Nah, I’m just giving it direct orders: 👇
“I need a Next.js project, using TypeScript. Integrate Drizzle ORM, with SQLite for the database. Add JWT authentication, implement Google and GitHub OAuth login. Then set up the Stripe billing framework and stub out the interfaces for me.”
On September 17th, starting from a basic multi-language template project, the AI spent roughly an afternoon and, voilà, spat out a fully functional backend framework for me! This wasn’t just a few code snippets; this was architecture made real. It crushed all the super tedious, repetitive ‘glue code,’ letting me dive headfirst into the core business logic. 🤯
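For a sense of what that scaffold looks like in practice, here’s a minimal sketch in the same spirit; the file names, the `users` columns, and the JWT helper are placeholders, not the actual PromptHub source.

```ts
// lib/db.ts (sketch) -- Drizzle + SQLite wiring of the kind the scaffold prompt asks for.
// Column names and file paths are placeholders, not the actual PromptHub source.
import Database from "better-sqlite3";
import { drizzle } from "drizzle-orm/better-sqlite3";
import { sqliteTable, text, integer } from "drizzle-orm/sqlite-core";

export const users = sqliteTable("users", {
  id: integer("id").primaryKey({ autoIncrement: true }),
  email: text("email").notNull().unique(),
  oauthProvider: text("oauth_provider"),         // "google" | "github"
  stripeCustomerId: text("stripe_customer_id"),  // billing hook left as an interface
});

const sqlite = new Database("prompthub.db");
export const db = drizzle(sqlite);

// lib/auth.ts (sketch) -- JWT issuing stub, assuming the `jsonwebtoken` package.
import jwt from "jsonwebtoken";

export function issueToken(userId: number): string {
  return jwt.sign({ sub: String(userId) }, process.env.JWT_SECRET!, { expiresIn: "7d" });
}
```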
My first takeaway? Easy: In a project’s early days, AI’s ultimate superpower is just wiping out all that ‘startup friction.’

Development Methodology: Ditch “Vibe Coding,” Embrace “Atomic Tasks”
Those ‘Vibe Coding’ (one-liner development) videos you see online? Honestly, just watch ’em for kicks, ‘cause anyone who buys into that is seriously naive. 🤪 Real, enterprise-level project development? That’s all about rigorous engineering. The pattern I’ve stumbled upon—I like to call it ‘Atomic Tasks’ or ‘Building Blocks’—is the real deal.
When I’m cooking up a new feature, I break the whole process down to its absolute bare bones. Then, I pop open a bunch of AI chat windows and get them all chugging along in parallel, like so: 👇
• Window A (Database Expert): “Based on my requirements, design the `prompts` table structure and write it out using Drizzle ORM syntax.”
• Window B (Backend Expert): “Here’s the table structure. Write me the corresponding CRUD APIs, implement them using Next.js API Routes, and ensure proper permission checks.” (The sketch right after this list shows the kind of thing Windows A and B hand back.)
• Window C (Frontend Expert): “Here are the API interfaces. Using React and Tailwind, write me a management page component that can call these interfaces.”
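To make that concrete, here’s a hedged sketch of the kind of code Windows A and B hand back for the prompts feature; the column names, the route path, and the `getCurrentUser` helper are illustrative placeholders, not the real PromptHub schema or code.

```ts
// Window A's output (sketch): the prompts table in Drizzle ORM.
import { sqliteTable, text, integer } from "drizzle-orm/sqlite-core";

export const prompts = sqliteTable("prompts", {
  id: integer("id").primaryKey({ autoIncrement: true }),
  spaceId: integer("space_id").notNull(),                          // owning personal space
  title: text("title").notNull(),
  body: text("body").notNull(),
  createdAt: integer("created_at", { mode: "timestamp" }).notNull(),
});

// Window B's output (sketch): a CRUD endpoint at app/api/prompts/route.ts,
// permission check first. getCurrentUser() is a hypothetical auth helper
// assumed to return { personalSpaceId: number } for the logged-in user.
import { NextResponse } from "next/server";
import { eq } from "drizzle-orm";
import { db } from "@/lib/db";
import { getCurrentUser } from "@/lib/auth";

export async function GET() {
  const user = await getCurrentUser();
  if (!user) return NextResponse.json({ error: "unauthorized" }, { status: 401 });

  const rows = await db
    .select()
    .from(prompts)
    .where(eq(prompts.spaceId, user.personalSpaceId));
  return NextResponse.json(rows);
}
```

Window C then gets handed exactly those route signatures, which is part of what keeps the frontend decoupled from how the backend stores things.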
So, the perks of this pattern are pretty sweet: 👇
1. Context Isolation: Each AI window focuses on just one thing, so it doesn’t get “mentally confused” by an overly long context.
2. Single Responsibility: Code decoupling is super clean, meaning AI rarely spits out spaghetti code that mixes frontend and backend.
3. Parallel Efficiency: While I’m waiting for the backend APIs to be written, I can already start brainstorming frontend components.
My second big takeaway? Don’t even think about treating AI like some all-knowing, all-powerful deity. Instead, view it as a small, specialized team made up of multiple ‘domain experts.’ 🧠

Hardcore Pitfalls: Moments Even AI Couldn’t Save My Butt
AI Coding? It’s no silver bullet, trust me. 🚫 In certain areas, especially when it comes to low-level stuff or configurations, AI somehow manages to mess up even worse than I do. Seriously.
1. Database Selection: Turso vs. Supabase
Initially, just for kicks, I gave the distributed Turso database a shot. Sounds cool, right? But man, the data synchronization delay was beyond ridiculous—a user would create a prompt, and it just wouldn’t show up even after refreshing a gazillion times. 😵💫 I even tried slapping on the consistency=strong parameter, but it did absolutely nothing. Zero.
So, I straight-up ditched it and swapped back to PostgreSQL-based Supabase. 🔄 That kind of call? AI can’t make it for you. You gotta deeply understand a database’s consistency model to grasp why Turso’s async vibe fundamentally clashed with my whole business scenario.
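Mechanically, the swap was mostly a Drizzle driver change; the sketch below uses placeholder connection strings, and the schema files also had to be ported from `sqlite-core` to `pg-core` column types, which isn’t shown.

```ts
// Before (sketch): Turso over libSQL. With replicated reads in the mix, a freshly
// written prompt could lag behind the very next read -- exactly what users were seeing.
import { createClient } from "@libsql/client";
import { drizzle as drizzleLibsql } from "drizzle-orm/libsql";

const turso = createClient({
  url: process.env.TURSO_DATABASE_URL!,
  authToken: process.env.TURSO_AUTH_TOKEN!,
});
export const dbTurso = drizzleLibsql(turso);

// After (sketch): Supabase's Postgres over postgres-js. A single primary, so a read
// issued right after a write sees the new row.
import postgres from "postgres";
import { drizzle as drizzlePg } from "drizzle-orm/postgres-js";

const client = postgres(process.env.DATABASE_URL!);
export const db = drizzlePg(client);
```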

2. Next.js’s useEffect Infinite Loop
This? Oh man, this is a classic problem. 🤦♂️ On the management page, the API was getting hammered in an infinite loop. I tossed the code to Qwen3, and it futzed with it for ages, but nope, still no fix.
Ultimately, I had to get my hands dirty and fix it myself. I dug into the useEffect dependencies and realized there were just too many dynamic states mixed in, creating a nasty chain reaction. So, I manually refactored it, keeping only the absolute core user?.personalSpaceId as a dependency. Boom! Problem solved. ✅
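Roughly what the fix looked like; the component and state names here are guesses at the shape of the problem, not the actual PromptHub code.

```tsx
import { useEffect, useState } from "react";

// Hypothetical shapes -- the real PromptHub types differ.
type User = { personalSpaceId?: string };
type Prompt = { id: number; title: string };

export function PromptManager({ user }: { user?: User }) {
  const [prompts, setPrompts] = useState<Prompt[]>([]);

  // Before (sketch): the dependency array held objects recreated on every render,
  // so each fetch -> setState -> re-render -> "new" dependency -> fetch again, forever:
  //
  //   useEffect(() => { ... }, [user, filters, prompts]);
  //
  // After: depend only on the one stable primitive that actually matters.
  useEffect(() => {
    if (!user?.personalSpaceId) return;
    fetch(`/api/prompts?space=${user.personalSpaceId}`)
      .then((res) => res.json())
      .then(setPrompts);
  }, [user?.personalSpaceId]);

  return (
    <ul>
      {prompts.map((p) => (
        <li key={p.id}>{p.title}</li>
      ))}
    </ul>
  );
}
```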
After that, I made sure to feed the correct solution right back to the AI, telling it: ‘Hey, next time you hit a snag like this, this is how you fix it.’ Essentially, I was reverse-training the AI, schooling it on my best practices. 🧑🏫

3. Chrome Extension’s Permission Black Hole
When it came to extension development, AI was basically a total noob. content.js flat-out refused to load, localStorage data just wouldn’t carry over… every single answer AI spit out for these issues? Dead wrong. 😤
Ultimately, I just had to humble myself, crack open the Chrome developer docs, and actually figure out the difference between host_permissions and scripting permissions. Only then did I finally squash that bug. 🤓
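For the record, here’s the distinction in Manifest V3 terms, sketched with placeholder URLs rather than the actual PromptHub extension: the `scripting` permission unlocks programmatic injection at all, `host_permissions` decides which origins the extension may touch, and statically declared content scripts load wherever their `matches` patterns say.

```ts
// background.ts (Manifest V3 sketch). The manifest needs two separate pieces:
//   "permissions": ["scripting", "storage"]         -> exposes chrome.scripting / chrome.storage
//   "host_permissions": ["https://example.com/*"]   -> grants access to pages on that origin
// A content script declared under "content_scripts" instead loads wherever its
// "matches" patterns point. All URLs below are placeholders.

chrome.action.onClicked.addListener(async (tab) => {
  if (!tab.id || !tab.url?.startsWith("https://example.com/")) return;

  // Programmatic injection needs the "scripting" permission AND host_permissions
  // (or activeTab) covering tab.url: without "scripting", chrome.scripting isn't
  // even exposed; without host access, the call is rejected.
  await chrome.scripting.executeScript({
    target: { tabId: tab.id },
    files: ["content.js"],
  });

  // The MV3 service worker has no localStorage, and extension pages live on their own
  // chrome-extension:// origin anyway, so share state via chrome.storage or messaging.
  await chrome.storage.local.set({ lastInjectedAt: Date.now() });
});
```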
My third crucial takeaway? AI’s a whiz at ‘implementation,’ no doubt, but it seriously struggles with ‘decision-making’ and ‘debugging.’ Especially when you’re deep in the weeds with underlying principles, platform quirks, or performance bottlenecks—the final call and the real debugging? That’s still all you, buddy. 🛠️
My Model “Toolbox”
Me? I never blindly put all my faith in just one model. My game plan is dynamic switching: use the right tool for the right job, every single time. 🧰
• Architecture Design & Complex Bug Fixing: Gemini 2.5 Flash is my go-to. It’s free, and it works wonders for tricky issues like Next.js hydration errors.
• UI/UX Code Implementation: Claude 4.1 is the undisputed champion here. Its CSS aesthetics and code implementation skills are top-notch, though it is the priciest, so I only bust it out for critical pages.
• Daily CRUD and Component Development: Qwen3 Coder Plus offers the best bang for your buck, a true workhorse that never complains about the grind.
• Data Processing and Script Generation: When reverse-engineering Google AI Studio’s API for data migration, I used Kilo paired with Gemini. It analyzed JSON structures and automatically whipped up Python scripts with insane efficiency.

So, all in all, these 19 days of development? Less ‘AI programming,’ more like ‘extreme human-machine collaborative programming.’ Think of AI as that blazing-fast intern cranking out code, and me? I’m the architect, constantly steering the ship, making the big calls, and always ready to jump in and save the day when things go sideways. 🚀👨‍💻
Under this model, a developer’s core value totally shifts. It’s less about ‘writing code’ and more about ‘asking the right questions,’ ‘making smart decisions,’ and ‘crushing system design.’
This, folks, might just be what our future as developers is all about. Wild, right? 🤔
Here’s the website link, feel free to give it a try: https://prompt.hubtoday.app/
Oh, and you can also hit me up on WeChat to join the discussion group: justlikemaki