Notez 0.2: Finally Putting AI to Work for Serious Writing

November 16, 2025
Tags: writing · AI assistant · productivity · workflow · product update

From ‘occasionally trying AI’ to ‘confidently using it every day’, Notez 0.2 brings a series of practical upgrades around editor–AI collaboration, knowledge organization, and retrieval for serious writing workflows.

If we had to summarize the 0.2 series in one line: we want AI to be something you can rely on safely, consistently, and meaningfully in serious writing, not just a toy you call up for short bursts of inspiration.

1. Starting from real pain: the tiny frictions that slow down serious writing

Over the past months, we’ve been writing our own changelogs, product copy, and long-form content inside Notez, while closely reading your feedback from the community, emails, and in-app conversations.

What surfaced repeatedly wasn’t “one big missing feature”, but a handful of small, high-frequency frictions that directly drag down your throughput:

1.1 Tool hopping: constantly jumping between editor and AI

  • The usual pattern: You write in the editor, then jump to a chat window for help, type a prompt, copy the result, paste it back into the document.
  • The hidden cost: Every jump is a context break—you have to remember where you were, relocate your cursor, and fix formatting by hand.
  • The deeper issue: The AI has no live view of your writing context, and you have no way to adjust what it sees in real time.

1.2 Limited control: either “seeing too much” or “seeing too little”

  • Typical scenario: You just want to rewrite a single paragraph, but the AI looks at the entire document and drifts away from your intent.
  • The opposite: You want the AI to respect the global logic of your argument, but it only sees the chunk you pasted.
  • The core tension: There’s no way to tune the AI’s field of view—you’re stuck between overly local and overly global.

1.3 Awkward flow: generated text and your draft don’t really work together

  • Status quo: AI outputs → you manually judge → accept or discard. It’s a one-way handoff.
  • What’s missing:
    • No quick way to refine a result (you have to prompt again)
    • Edits in the document don’t feed back into the AI’s understanding
    • Multi-turn chats drift away from what’s actually in your draft
  • Net effect: AI feels like a fast but disposable tool, not a stable collaborator.

The common thread isn’t “we need something disruptive”. It’s that for serious writing, people care more about stability, control, and an uninterrupted train of thought than about anything disruptive.

2. The 0.2 roadmap: practical upgrades around collaboration, control, and organization

Across releases from 0.2.1 to 0.2.14, we didn’t try to boil the ocean. Instead, we focused on one clear direction:

  • Make the editor and AI work together more smoothly.
  • Give writers finer-grained control over what the AI can see.
  • Bring local knowledge and retrieval closer to real writing workflows.

2.1 Stronger linkage: editor and chat no longer live in different worlds

Core idea: Real-time, two-way linkage between the editor and chat.

  • From editor to chat: Select any text, start a conversation in one click, with context carried over automatically.
  • From chat back to editor: Insert AI responses directly at the cursor or use them to replace a selection.
  • The key shift: This isn’t “automating copy-paste”; it’s keeping your document and AI conversation in the same semantic space.

What this means for you:

  • Your train of thought doesn’t get broken by tool switching.
  • The AI is always “on site” with your document, not answering in isolation.
  • Chat history and document versions stay naturally aligned.

0.2.12: First shipped editor ↔ chat linkage.
0.2.13: Refined the interactions for more consistent behavior.
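
To make the linkage concrete, here is a minimal sketch of how a selection-to-chat handoff and an insert-back step could be modeled. It is an illustration only; the type and function names (EditorSelection, askModel, applyReply) are placeholders, not Notez’s actual internals.

```ts
// Hypothetical sketch: carrying an editor selection into a chat turn,
// then writing the reply back into the same place in the document.
interface EditorSelection {
  docId: string;
  start: number; // character offsets within the document
  end: number;
  text: string;
}

interface ChatTurn {
  prompt: string;
  context: EditorSelection; // the chat always knows where the text came from
}

// Placeholder for whatever model backend is configured.
async function askModel(turn: ChatTurn): Promise<string> {
  return `rewritten: ${turn.context.text}`;
}

// "Insert at cursor / replace selection" becomes a plain text splice,
// because the reply still carries its originating selection.
function applyReply(doc: string, sel: EditorSelection, reply: string): string {
  return doc.slice(0, sel.start) + reply + doc.slice(sel.end);
}

async function rewriteSelection(doc: string, sel: EditorSelection): Promise<string> {
  const reply = await askModel({ prompt: "Rewrite this more concisely.", context: sel });
  return applyReply(doc, sel, reply);
}
```

The point of the sketch is that the reply never loses track of the selection it came from, which is what turns “insert at cursor / replace selection” into a one-click step instead of a copy-paste round trip.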

2.2 Adjustable context window: move smoothly between “local” and “global”

Core idea: Let you tune how much context the AI can see when helping you edit.

  • Real-world example: You’re writing a 5,000-word paper and only want help polishing a section in chapter 3.
  • Old problem: Either the AI sees the whole document (too noisy) or just the snippet you pasted (too blind).
  • Our approach:
    • Three context modes: selection only / selection + nearby paragraphs / entire document.
    • Visual preview: A clear view of exactly what the AI is about to read.
    • Smart defaults: Recommended modes based on the action (continue writing, rewrite, summarize, etc.).

What this means for you:

  • You can precisely control where the AI’s “attention” goes.
  • The model is less likely to wander off your intent.
  • You can move freely between local precision and global coherence.

0.2.13: Introduced customizable AI editing context.
0.2.12: Expanded the context window for “continue writing” to improve continuity.
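
As a rough illustration of the three modes, the sketch below shows one way the context sent to the model could be assembled. The mode names, the per-action defaults, and the two-paragraph window are assumptions made for the example, not Notez’s actual settings.

```ts
// Hypothetical sketch of the three context modes described above.
type ContextMode = "selection" | "selection+nearby" | "document";

// Illustrative per-action defaults; the real recommendations may differ.
const defaultMode: Record<string, ContextMode> = {
  rewrite: "selection",
  continue: "selection+nearby",
  summarize: "document",
};

function buildContext(
  paragraphs: string[],
  selectedIndex: number,
  mode: ContextMode,
  nearby = 2 // assumed number of neighboring paragraphs
): string {
  if (mode === "selection") return paragraphs[selectedIndex];
  if (mode === "selection+nearby") {
    const from = Math.max(0, selectedIndex - nearby);
    const to = Math.min(paragraphs.length, selectedIndex + nearby + 1);
    return paragraphs.slice(from, to).join("\n\n");
  }
  return paragraphs.join("\n\n"); // "document"
}
```

With the assumed defaults, a rewrite would send only the selected paragraph, while a summary would send the whole document; the visual preview would then simply render whatever such a function returns.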

2.3 Upgraded selection actions: turn “select” into the primary AI entry point

Core idea: Treat the text you select as the first-class interface for AI operations.

New capability matrix:

| Action | Old way | Notez 0.2 way |
| --- | --- | --- |
| Rewrite selection | Copy → open chat → paste → tweak | Select → right-click → choose a preset prompt |
| Search from selection | Manually craft keywords → search → compare | Select → one-click search → insert labeled snippets |
| Chat about selection | Copy → start new chat → paste context | Select → start chat (context attached automatically) |
| Run custom prompt | Copy → hand-build prompt → submit | Select → pick your custom prompt → run instantly |

Design principles:

  • Less manual copy-paste: This is where cognitive load silently accumulates.
  • Automatic context handling: The AI should always “know where you are and what you’re doing”.
  • Action = intent: Selecting + clicking is often a clearer signal than a long natural language prompt.

0.2.14: Added selection-based search and chat.
0.2.12: Introduced selection-based custom AI actions.
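
Another way to read the matrix above is as a dispatch table: each selection action maps to a preset request with the selection attached automatically. The sketch below is purely illustrative; the action names and prompt wording are placeholders, not Notez’s built-in presets.

```ts
// Hypothetical mapping from a selection action to a ready-to-run request.
type SelectionAction = "rewrite" | "search" | "chat" | "customPrompt";

interface AIRequest {
  prompt: string;
  attachedText: string; // the selection, attached automatically
}

const presets: Record<SelectionAction, (text: string) => AIRequest> = {
  rewrite: (text) => ({ prompt: "Rewrite this in a more formal tone.", attachedText: text }),
  search: (text) => ({ prompt: "Find related passages in my notes.", attachedText: text }),
  chat: (text) => ({ prompt: "Let's discuss this passage.", attachedText: text }),
  customPrompt: (text) => ({ prompt: "<your saved prompt goes here>", attachedText: text }),
};

// "Select + click" collapses into a single call: no copy-paste, no manual context.
function runSelectionAction(action: SelectionAction, selectedText: string): AIRequest {
  return presets[action](selectedText);
}
```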

2.4 Files and retrieval: making your knowledge actually usable

Core idea: File structure + retrieval + traceability that support active writing, not just storage.

Reframing the problem:

  • Old mindset: “I have a lot of documents” → a passive archive.
  • New mindset: “How can my existing knowledge actively participate in this draft?” → requires structure, retrieval, and trustworthy references.

Three key capabilities:

  1. Folder system (0.2.13)

    • Not just for neatness; folders become retrieval scopes.
    • Choose whether to search within the current folder or across everything.
    • Gradually build topic- or domain-specific semantic spaces.
  2. Search traceability (0.2.14)

    • Every result comes with its source file and location.
    • Jump to the original context in a single click.
    • Build drafts with verifiable references rather than “AI says so”.
  3. Table support (0.2.14)

    • Native editing for markdown tables.
    • AI can help generate, extend, and analyze structured data.
    • No more hand-fighting markdown pipes and alignment.

What this means for you:

  • Your notes stop being a sleeping archive and become real-time assets.
  • You move from “finding files” to “activating relevant fragments”.
  • Citation and traceability become part of your normal writing routine.
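
For readers who like to see the shape of things, here is a small sketch of what folder-scoped, traceable search could look like. The result fields (source path, line number, snippet) and the naive keyword matching are assumptions made for the example, not a description of Notez’s search engine.

```ts
// Hypothetical shape of a traceable, folder-scoped search result: every
// snippet keeps a pointer back to its source file and location.
interface NoteFile {
  folder: string; // e.g. "Research"
  path: string;   // e.g. "Research/attention-notes.md"
  lines: string[];
}

interface SearchHit {
  sourcePath: string;
  lineNumber: number; // 1-based, so "jump to original" has a target
  snippet: string;
}

// Naive keyword scan, limited to one folder when a scope is given.
function searchNotes(files: NoteFile[], query: string, scope?: string): SearchHit[] {
  const needle = query.toLowerCase();
  const hits: SearchHit[] = [];
  for (const file of files) {
    if (scope && file.folder !== scope) continue;
    file.lines.forEach((line, i) => {
      if (line.toLowerCase().includes(needle)) {
        hits.push({ sourcePath: file.path, lineNumber: i + 1, snippet: line.trim() });
      }
    });
  }
  return hits;
}
```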

3. A before/after snapshot: from manual juggling to smooth collaboration

Scenario: writing a 3,000-word technical article

Before Notez 0.2: the cognitive cost map

1. Gather material (15 minutes)
   - Open 5 different docs and skim
   - Copy key passages into a scratch pad

2. Draft an outline (10 minutes)
   - Manually type headings into your editor
   - Reshuffle based on your scratch pad

3. Fill in content (60 minutes)
   - Get stuck mid-paragraph → switch to an AI tool
   - Copy the recent context + write a prompt → wait for output
   - Copy result back → fix formatting
   - Repeat 10+ times

4. Fact-check and referencing (20 minutes)
   - Manually search source docs
   - Cross-check quotes and links
   - Insert citations one by one

5. Style consistency (15 minutes)
   - Notice arguments and tone drift across sections
   - Rewrite multiple paragraphs for consistency

Total: ~120 minutes, with at least 40 minutes lost to tool switching and manual juggling.

With Notez 0.2: a more integrated flow

1. Import material (2 minutes)
   - Drag 5 docs into Notez
   - Group them into a dedicated folder

2. Draft the outline (5 minutes)
   - Type your high-level sections
   - Select a heading → ask: “Based on existing material, what should this section cover?”
   - Insert AI-suggested structure with source hints

3. Fill in content (35 minutes)
   - When you need help continuing a paragraph → use “Continue writing” (context auto-handled)
   - Need a cleaner version of a paragraph → select it → run your “formal rewrite” preset
   - Need supporting material → select a sentence → search → insert snippets with traceability
   - All of this happens inside the editor—no window hopping

4. Verify references (5 minutes)
   - Click traceability markers on AI-generated content
   - Jump to original sources to confirm
   - Pin important conclusions to steer future generations

5. Keep style consistent (handled as you write, no separate pass)
   - Pinned key definitions and patterns act as style anchors
   - Later generations naturally align to what you’ve already approved

Total: ~50 minutes, with far fewer context breaks and almost no manual copy-paste.

Key differences:

  • It’s not just about “writing faster”; it’s about stripping away low-value operations like tool hopping and reformatting.
  • It’s not “AI writes the whole thing for you”; it’s well-timed, context-aware assistance when you’re stuck, refining, or validating.
  • It’s not “yet another AI feature layer”; it’s the editor, search, and chat working together as a single loop.

4. Product principles behind 0.2: constraints we stick to

Throughout the 0.2 series, we’ve deliberately held ourselves to three constraints:

4.1 Visibility over automation

Counter-example: A big “auto-complete the whole document” button looks impressive, but:

  • You lose control over individual steps.
  • Output quality becomes unpredictable.
  • When something feels off, it’s hard to trace where it went wrong.

Our choices:

  • Always show what context will be sent to the AI.
  • Always attach traceable references to AI-generated content where applicable.
  • Let you intervene or adjust at any point in the chain.

4.2 Collaboration over replacement

Counter-example: “Fully automated report generation” sounds magical, but in practice:

  • Content often lacks depth and nuance.
  • Tone and argumentation drift across sections.
  • You may end up relying less on your own judgment.

Our choices:

  • AI is always a candidate next step, not the final say.
  • We optimize for “help me continue, refine, or verify”, not “do it all for me”.
  • Every confirmation you make teaches the system more about your style and standards.

4.3 Gradual progress over disruption

Counter-example: Forcing people to learn a brand-new tool paradigm.

Our choices:

  • Keep the core Markdown editing experience familiar.
  • Let new AI features emerge from natural actions like selecting text and right-clicking.
  • Ensure your workflow still makes sense even if you don’t use AI on a given day.

5. Where Notez 0.2 shines: a few concrete use cases

Use case 1: research papers and long-form analysis

Workflow:

  1. Import your papers into a Research folder.
  2. Draft your introduction manually.
  3. When you need a citation, select the relevant sentence → search → insert source-backed snippets.
  4. For methods sections, set the AI’s context to “this chapter only” to avoid noise from the whole doc.
  5. Pin key definitions so later generations keep your terminology consistent.

Core value: Traceable citations and consistent terminology.

Use case 2: product documentation and release notes

Workflow:

  1. Organize docs into modules (User Guide / API / FAQs, etc.).
  2. When an API changes, select the new behavior → ask: “Which docs does this impact?”
  3. Let AI suggest relevant sections based on folder-scoped search.
  4. Jump through them and use a “technical doc tone” preset to rewrite for consistency.

Core value: Cross-document consistency without manual hunting.

Use case 3: recurring long-form content (columns, series, books)

Workflow:

  1. Import past articles into a Previous Columns folder.
  2. Start the new piece with a few paragraphs in your own voice.
  3. When you need to recall old arguments, select a theme → search → insert snippets with links back.
  4. For continuity, let the AI see the whole document when expanding later sections.
  5. Select key paragraphs and ask the AI to “expand with examples” where needed.

Core value: Style and argument continuity across a series.

6. Small but important fit-and-finish work

Not everything in 0.2 is headline-grabbing. Some of the most impactful changes are quiet quality-of-life improvements:

System tray background saving (0.2.14)

  • Pain: Losing unsaved work after closing a window by accident.
  • Solution: Minimize to tray + background autosave + gentle restore prompts.
  • Impact: Your writing feels safer by default.
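
For the curious, “background autosave” usually boils down to something like the debounced sketch below. The delay and function names are assumptions for illustration, not Notez’s implementation.

```ts
// Hypothetical debounced autosave: every edit resets a short timer, and the
// document is persisted once typing pauses, even while the window is hidden.
function createAutosaver(save: (content: string) => Promise<void>, delayMs = 2000) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  let latest = "";

  return {
    onEdit(content: string) {
      latest = content;
      if (timer) clearTimeout(timer);
      timer = setTimeout(() => void save(latest), delayMs);
    },
    // Flush immediately, e.g. when the window is minimized to the tray.
    async flush() {
      if (timer) clearTimeout(timer);
      await save(latest);
    },
  };
}
```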

Better behavior behind VPNs/proxies (0.2.13)

  • Pain: Sync and model calls being flaky on proxied networks.
  • Solution: Smarter network environment detection + connection pooling tweaks.
  • Impact: More reliable usage across borders and constrained environments.

Clearer heading hierarchy for long docs (0.2.13 & 0.2.14)

  • Pain: Messy spacing and unclear hierarchy in long documents.
  • Solution: Refined heading styles and vertical rhythm.
  • Impact: Long reading and editing sessions feel less tiring.

Better support for reasoning-focused models (0.2.12)

  • Pain: Models like o1-preview not streaming or displaying their reasoning cleanly.
  • Solution: Improved streaming pipeline and thinking-process visualization.
  • Impact: You can actually harness the extra reasoning power instead of fighting the UX.
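
Conceptually, the streaming fix comes down to routing “thinking” chunks and answer chunks to different places as they arrive. The sketch below assumes a chunk format with a kind field; real provider payloads differ, so treat it as an illustration only.

```ts
// Hypothetical stream handling: route reasoning chunks to a collapsible
// "thinking" panel and answer chunks to the main reply as they arrive.
interface StreamChunk {
  kind: "reasoning" | "answer"; // assumed field, not a real provider schema
  text: string;
}

async function renderStream(
  chunks: AsyncIterable<StreamChunk>,
  ui: { appendThinking(t: string): void; appendAnswer(t: string): void }
): Promise<void> {
  for await (const chunk of chunks) {
    if (chunk.kind === "reasoning") ui.appendThinking(chunk.text);
    else ui.appendAnswer(chunk.text);
  }
}
```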

Common pattern: These changes don’t shout for attention, but they quietly remove friction.

7. What 0.2 doesn’t solve yet—and where we’re headed

Known challenges

  1. Cross-document semantic conflicts

    • When different documents disagree, how should the system prioritize?
    • Today, you can use pins as anchors; in the future, we want smarter conflict surfacing.
  2. Not all references are equally reliable

    • Some sources should count more than others.
    • We’re exploring ways to score references based on provenance and contextual relevance.
  3. Balancing local and cloud models

    • Different models excel at different tasks.
    • We want Notez to recommend when a lightweight local model is enough, and when a stronger remote model is worth it.

Looking ahead to 0.3

  • Editor core overhaul: Smoother layout, more powerful tables, more natural block editing.
  • Deeper search: Hybrid semantic + keyword retrieval.
  • Custom editor buttons: Turn your favorite prompts into one-click toolbar actions.
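
As a preview of what “hybrid semantic + keyword retrieval” typically means in practice, the sketch below blends a simple keyword score with an embedding similarity using a weighted sum. The weights and scoring functions are illustrative assumptions, not a commitment to a specific 0.3 design.

```ts
// Hypothetical hybrid ranking: blend a keyword score with a semantic
// (embedding cosine) score using a fixed weight.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Fraction of query terms that literally appear in the text.
function keywordScore(text: string, query: string): number {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  const lower = text.toLowerCase();
  const matches = terms.filter((t) => lower.includes(t)).length;
  return terms.length ? matches / terms.length : 0;
}

function hybridScore(
  doc: { text: string; embedding: number[] },
  query: { text: string; embedding: number[] },
  alpha = 0.5 // assumed blend weight between keyword and semantic scores
): number {
  return (
    alpha * keywordScore(doc.text, query.text) +
    (1 - alpha) * cosine(doc.embedding, query.embedding)
  );
}
```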

8. Wrapping up: putting AI where your real work happens

The heart of Notez 0.2 isn’t “more AI features”. It’s about putting AI into the place where your actual work happens: inside the writing flow itself.

  • From “calling AI in occasionally” → to “having it quietly present when needed”.
  • From “dump the whole doc into a black box” → to “share just enough context, by design”.
  • From “copy → paste → prompt → copy back” → to “select as intent, with one-click actions”.
  • From “documents as a static archive” → to “knowledge snippets actively shaping your draft”.

If what you want is to enjoy the upside of AI without giving up privacy, control, or rigor in your work, the 0.2 series is a concrete step in that direction.


If you still find yourself thinking, “Do I really need another AI writing tool?”, there’s a different question that might be more honest:

When was the last time I felt annoyed because I had to switch tools, copy text around, or reconstruct context just to get a bit of help?

If the answer is “pretty often”, Notez 0.2 is built exactly for those moments of friction.

Serious writing deserves a space that doesn’t fragment your attention.

We’d love for Notez to be that space—and for AI to finally feel like it belongs there.