How AI Saved Me 30 Minutes
This title may seem tongue-in-cheek in a world where seldom a day goes by without hearing a friend, colleague, or a viral post touting how AI saved them hours, if not days, of engineering work. In fact, just this week a friend humbly bragged to me about how he used AI to implement a self-hosted DNS server in Go, and he didn't even know Go!
Alas, the title is accurate. Ironically, I've been too busy to play around with AI. Moreover, using AI requires a measure of 'letting go' which is difficult for me to stomach. In the rare moments I've decided to test AI, I've been left unimpressed and concluded that I shouldn't have bothered with it in the first place. I blamed AI for not being ready for my needs. However, as I keep hearing others' positive testimonies, I'm changing my tune: maybe I'm the one not ready for AI.
In this post I want to share how I took a small step towards improving my relationship with AI. Beyond saving 30 minutes, I developed a little more trust in AI output and also got better at prompting. Most importantly, my limiting beliefs around AI eroded a bit.
I've realized that learning how to use AI is like climbing with a new climbing partner. We can't expect to climb K2 together in the first year. That's setting the relationship up for failure. Maybe my initial tests with AI fell into this trap. Instead, we need to first build trust on easier climbs and learn how the other communicates, before embarking on more challenging routes.
What I describe below is an 'easy climb'.
On Monday evening, I deployed what I thought would be an innocuous change and called it a day. The following morning, I noticed an unusual increase in 500s during my routine observability checks. It wasn't high enough to alert me, but ~200 users had been affected over 12 hours. I quickly deployed the fix. Some head-shaking may have been involved. Life of a solo developer.
As is my custom whenever something like this happens, I sent the affected users an email apologizing for the technical issue and asking them to retry whatever they were doing before. This is where I decided to play with AI. Based on my previous experiences, I wasn't hopeful.
The climbing K2 version of my prompt to AI would've been: 'figure out the users who've been affected by this bug and send them an apology email'. Disappointment guaranteed. Instead, I broke things down and used AI for well-defined, constrained tasks.
First, I manually grabbed all the requests from New Relic which errored out:
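I pulled these by hand, but the same pull could also be scripted. Here's a rough sketch against New Relic's NerdGraph API; the account ID, the NRQL attribute names, and the error filter are illustrative assumptions, not the exact ones from my incident:

```python
import json

NERDGRAPH_URL = "https://api.newrelic.com/graphql"  # New Relic's GraphQL endpoint

def build_nerdgraph_payload(account_id: int, nrql: str) -> dict:
    """Wrap an NRQL query in the NerdGraph GraphQL envelope."""
    query = """
    query($accountId: Int!, $nrql: Nrql!) {
      actor { account(id: $accountId) { nrql(query: $nrql) { results } } }
    }
    """
    return {"query": query, "variables": {"accountId": account_id, "nrql": nrql}}

# Illustrative NRQL: 500-level transaction errors over the incident window.
nrql = ("SELECT timestamp, request.uri FROM TransactionError "
        "WHERE httpResponseCode = '500' SINCE 12 hours ago LIMIT MAX")

payload = build_nerdgraph_payload(1234567, nrql)

# To actually run it, POST the payload with an API key, e.g.:
#   requests.post(NERDGRAPH_URL, json=payload, headers={"API-Key": "NRAK-..."})
print(json.dumps(payload, indent=2))
```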
Converting this to JSON, I got the following payload:
Not every request URI gives me information about the affected user. I was able to successfully prompt the LLM to filter out events in the JSON which didn't give me any user information. The LLM was also helpful in parsing the JSON. Some examples of prompts where the LLM did things right:
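To give a flavor of the filtering I prompted for, here's a minimal sketch; the event shape and the `user_id` query parameter are illustrative stand-ins for my real request URIs:

```python
from urllib.parse import urlparse, parse_qs

# Illustrative events; only some request URIs carry a user identifier.
events = [
    {"timestamp": 1700000000, "request_uri": "/api/orders?user_id=42"},
    {"timestamp": 1700000060, "request_uri": "/health"},
    {"timestamp": 1700000120, "request_uri": "/api/orders?user_id=57"},
]

def user_id_from_uri(uri: str):
    """Return the user_id query parameter if present, else None."""
    params = parse_qs(urlparse(uri).query)
    values = params.get("user_id")
    return values[0] if values else None

# Keep only events that identify a user, and collect the unique IDs.
identified = [e for e in events if user_id_from_uri(e["request_uri"])]
affected_user_ids = sorted({user_id_from_uri(e["request_uri"]) for e in identified})
print(affected_user_ids)  # -> ['42', '57']
```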
The last prompt didn't work as I expected. The LLM responded:
I said:
To my pleasant surprise, the LLM responded:
Indeed, my IDE was not sharing the full JSON file. When it did, the LLM performed as expected. Moments like this won my confidence. I especially liked how the LLM shared its thought process with me.
The LLM also generated the code that queried my database for the target users and enqueued the emails to them. What impressed me was that it was familiar with my ORM syntax, models, relationships between the models, and model attributes. It also added print statements and comments for easier readability. Save for one minor issue, the code worked on the first attempt.
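My actual code used my app's ORM; as a rough stand-in for the pattern the LLM generated, here's a sketch using stdlib sqlite3 and an in-memory queue. The table schema, column names, and the queue itself are illustrative assumptions:

```python
import sqlite3

# Illustrative schema standing in for my real database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (id, email) VALUES (?, ?)",
                 [(42, "a@example.com"), (57, "b@example.com"), (99, "c@example.com")])

affected_user_ids = [42, 57]  # IDs recovered from the error events

email_queue = []  # stand-in for a real background-job queue

def enqueue_apology(email: str) -> None:
    """Queue an apology email instead of sending it inline."""
    email_queue.append({"to": email, "template": "apology"})

# Fetch only the affected users, then enqueue one email each.
placeholders = ",".join("?" for _ in affected_user_ids)
rows = conn.execute(
    f"SELECT id, email FROM users WHERE id IN ({placeholders})",
    affected_user_ids,
).fetchall()

for user_id, email in rows:
    print(f"Enqueuing apology email for user {user_id}")
    enqueue_apology(email)

print(f"{len(email_queue)} emails queued")
```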
Seasoned AI users may see the above and be like:
But this was my first taste of success with AI. I cannot wait to use it again for something a bit more challenging.
Meanwhile, I hope this post convinces more people to give AI another chance for the 'easy climbs'.