# Documenting AI Conversations in Hugo
I find myself asking AI models questions constantly while working — debugging, architecture decisions, quick syntax lookups. Most of that knowledge disappears into chat history. So I built a Hugo shortcode to embed those conversations directly into blog posts.
## The shortcode
Drop `ai` into any post with a `prompt` parameter and the response as the inner content. It renders a clean chat-bubble layout that fits the site theme.
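A minimal version of the template might look like this. This is a sketch, not the exact markup from my theme: the file path `layouts/shortcodes/ai.html` is Hugo's standard location, but the class names are placeholders.

```html
<!-- layouts/shortcodes/ai.html (hypothetical sketch) -->
<div class="ai-chat">
  <div class="ai-prompt">
    <span class="ai-label">You</span>
    <p>{{ .Get "prompt" }}</p>
  </div>
  <div class="ai-response">
    <span class="ai-label">{{ .Get "model" | default "Claude" }}</span>
    {{ .Inner | markdownify }}
  </div>
</div>
```

`.Get` reads named parameters, `default` supplies the fallback model label, and `markdownify` renders the inner content so Markdown in responses works.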
### Basic example
**Prompt:** What's the difference between `docker stop` and `docker kill`?

`docker stop` sends `SIGTERM` first, giving the container time to shut down gracefully (10 seconds by default), then sends `SIGKILL` if it hasn't stopped.

`docker kill` sends `SIGKILL` immediately, forcing an instant shutdown with no cleanup.

Use `stop` for production workloads. Use `kill` when a container is hung and won't respond to `SIGTERM`.
### Specifying the model
The `model` parameter defaults to "Claude", but you can override it for other models.
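An invocation with the override might look like this. The model name and prompt here are placeholders for illustration:

```
{{< ai model="GPT-4" prompt="Your question here" >}}
The response goes here.
{{< /ai >}}
```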
When you run `terraform plan` or `apply`, Terraform acquires a lock on the state backend (e.g. an S3 bucket with a DynamoDB table, or GitLab's HTTP backend). If another process is already holding the lock, your command will wait or fail. This ensures that only one person or CI job can modify infrastructure at a time, preventing race conditions and partial applies.

### Code in responses
Markdown works inside the response body, including fenced code blocks.
Here’s a multi-stage build that keeps the final image small:
```dockerfile
FROM hugomods/hugo:exts AS build
COPY . /src
WORKDIR /src
RUN hugo --minify

FROM nginx:alpine
COPY --from=build /src/public /usr/share/nginx/html
```
The first stage builds the site; the second serves it with nginx. The final image is around 40 MB.
## Usage
```
{{< ai prompt="Your question here" >}}
The AI response goes here. **Markdown** works.
{{< /ai >}}
```
| Parameter | Default | Description |
|---|---|---|
| `prompt` | (required) | The question you asked |
| `model` | `Claude` | Label for the AI model |
The post is marked as `draft: true`; flip it to `false` when you're ready to publish.
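For reference, the toggle lives in the post's front matter. A minimal YAML example (the title is illustrative):

```yaml
---
title: "Documenting AI Conversations in Hugo"
draft: false  # keep true while writing; set false to publish
---
```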