Drafts Reference GPT

I’m experimenting with tuning a custom GPT in ChatGPT designed to assist with Drafts questions, particularly action creation.

It’s a work in progress, but it should be more accurate than asking similar questions of one of the default ChatGPT models: it heavily weights the official documentation and Action Directory examples, and is less likely to suggest things like JavaScript functions that are not available in Drafts’ non-browser JS implementation.

I’m interested in feedback if anyone tries it out, especially if it leads you astray or gives bad examples, so I can tune it further.


Ooh, I will probably use this. ChatGPT has already gotten me through some “Hm, this action is going to need some JavaScript and I don’t want to type that on my phone right now” moments and given me something I can use. It managed to retain some Drafts specifics from its training on the web, I presume, without doing a search in the chat. It did hallucinate a little, though, forgetting it wasn’t running in Node. I’ll tell ChatGPT to remind me to try this instead next time I ask it for help with an action!

So, I have tried this out, and it has worked very well for me.

I’m not a programmer, and I know next to nothing about coding, JavaScript, etc.

I’ve been using ChatGPT to create or help me modify a couple of actions, mostly to good effect.

In this particular instance, I used ChatGPT to create a new action that exports notes from Drafts to an OPML file, which can then be opened in MindNode, the mind-mapping app.

The action allows me to select notes by tag(s); it reformats the title to ensure each note follows a consistent hierarchical numbering system (think Luhmann-style Zettelkasten numbering). The OPML file is saved to a bookmarked location.
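As a rough illustration of the OPML side of an action like this (not the poster’s actual script; the function names and structure here are my own assumptions), the core of the export is wrapping each note title in an `<outline>` element:

```javascript
// Hypothetical sketch: build an OPML outline string from note titles.
// A real Drafts action would gather drafts by tag (e.g. with Draft.query)
// and write the result to a bookmarked location; this only shows the
// shape of the OPML that MindNode can open.
function escapeXml(s) {
  return s.replace(/[<>&"']/g, c => ({
    "<": "&lt;", ">": "&gt;", "&": "&amp;", '"': "&quot;", "'": "&#39;"
  }[c]));
}

function buildOpml(titles) {
  const outlines = titles
    .map(t => `    <outline text="${escapeXml(t)}"/>`)
    .join("\n");
  return `<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <head><title>Drafts export</title></head>
  <body>
${outlines}
  </body>
</opml>`;
}

// Luhmann-style numbered titles, flattened to one outline level here;
// nesting by number prefix would be the next step.
const opml = buildOpml(["1 Root note", "1.1 Child & detail"]);
```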

In total, it took me several long days spread over several weeks to get a working version. ChatGPT seemed to regularly “forget” some of the fundamental requirements for JavaScript actions in Drafts, but I got there (and I guess learnt a thing or two in the process). I eventually had an action which worked as expected, but for various reasons, the performance was slow.

When I found out about Drafts GPT, I uploaded the action and asked the bot to debug the script and suggest ways in which it could be improved and optimised. It came back with various suggestions, all of which I accepted.

I added this revised version to Drafts. Execution time is vastly improved, and the output is just as expected.

Drafts GPT also suggested further possible improvements, e.g., allowing the user to specify the output file name and location. At some point, I may experiment to see if it can code the reverse journey—exporting changes made to individual nodes back to Drafts.

I’d be happy to share the JavaScript as it stands, with the provisos that a) I’m not claiming any credit for the script itself, and b) I would not be able to answer any questions about how it works!

Edited to add: and of course, a huge thank you to @agiletortoise for making this tool available, and also for Drafts itself - it’s become an essential part of my daily workflow.


That’s good to hear. I will work on tweaking this some, but that sounds like it’s accomplishing its goal.


I tried this, and while it got me 99% there, I’ll mention the hiccups for your reference. The prompt was to write a Drafts script that asks for a start date, an end date, and an amount, and then makes a new draft with a Markdown table showing the amount distributed among the days (so with start date 5/21, end date 5/23, and nine items, it should give a table showing three items per day).

  • The first problem was that it used addTextField for all the prompt items. I asked it to define addDatePicker, and once it did, the LLM intuited that I wanted addDatePicker for the date prompts instead.
  • Additionally, it tried to use draft.setText for the final output. I don’t have to tell you this, but setText is a method of editor, not of draft. I told it draft.setText was undefined, and it then knew to replace it with draft.content = table and draft.update() on its own.
  • The next problem was that it coded the date picker as p.addDatePicker("start", "Start Date", "date", new Date()); and I was getting date-and-time prompts when I only wanted date prompts. I could see the script didn’t pass the options dictionary to the function, so I added that, and it still didn’t work. Then I noticed the errant "date" argument in there, so it seems like the LLM kind of knew it needed {"mode": "date"} but couldn’t quite formulate it. I fixed that manually.
  • Finally, something in the script wouldn’t account for the end date, so 5/21-5/23 would result in five items for 5/21 and four for 5/22. I pointed that out; at first the LLM said everything was OK as is, but when I pushed a little more it gave a revision that worked correctly.
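For reference, the inclusive-end-date distribution the script eventually needed looks something like this (a minimal sketch in plain JavaScript; the function name is my own, and the comment about where the values would come from reflects the Drafts Prompt API described above):

```javascript
// Sketch: distribute `amount` items evenly across the days from
// `start` to `end` INCLUSIVE — the off-by-one the original script had
// was treating the end date as exclusive. In a real Drafts action,
// start/end/amount would come from a Prompt (addDatePicker with
// {"mode": "date"} for the dates, addTextField for the amount).
function distributeByDay(start, end, amount) {
  const MS_PER_DAY = 24 * 60 * 60 * 1000;
  // +1 makes the end date itself count as a day
  const days = Math.round((end - start) / MS_PER_DAY) + 1;
  const base = Math.floor(amount / days);
  let remainder = amount % days;
  const rows = [];
  for (let i = 0; i < days; i++) {
    const d = new Date(start.getTime() + i * MS_PER_DAY);
    // hand out any remainder one item at a time to the earliest days
    const count = base + (remainder > 0 ? 1 : 0);
    if (remainder > 0) remainder--;
    rows.push([d, count]);
  }
  return rows;
}

// 5/21 through 5/23 with nine items: three days, three items each
const rows = distributeByDay(new Date(2024, 4, 21), new Date(2024, 4, 23), 9);
```

Building the Markdown table from `rows` and assigning it with `draft.content = table; draft.update();` is then the straightforward part.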

I have previously had luck just asking ChatGPT in general for help with Drafts scripts, and the output has been pretty good, so I tried the whole thing again using general ChatGPT rather than the Drafts Helper. I don’t know if it benefitted from my previous query to the Helper, but general ChatGPT returned a script that used addDatePicker straightaway and didn’t have the problem of excluding the end date. But while addDatePicker was functionally right, it omitted any options and the prompt asked for date and time; when I asked ChatGPT to switch to just date, it applied {"mode": "date"} correctly.

So, allowing for the LLM to have “learned” from my first query, I seem to have gotten better results from general ChatGPT than from the Drafts Helper.

Happy to share the full LLM conversation by DM or anything else you might need. Thanks!


So, I’ve been thrashing this Helper tool quite hard over recent weeks in the quest for additional actions to support my Zettelkasten note-taking project. Again, I’m not a programmer, so the whole journey, fun as it’s been, has been littered with mistakes more qualified folk would not make.

The end result has been a set of actions that pretty much do what I want them to do - an outcome I would not have achieved had it not been for the support from this Helper. As per an earlier poster, I did sometimes find it useful to switch from the Drafts Helper to general ChatGPT 4o when things got stuck.

I tried to keep an informal log of the most obvious and interesting issues as they arose; it’s a combination of anecdotal and verbatim reports. I then ran that log through the Helper and asked it to summarise the issues and report back with suggested areas where its performance might be improved.

This is the response - I hope it’s helpful.

Thank you for the comprehensive log. After reviewing it carefully from oldest to newest, here is a structured summary of the main issues, grouped by category, with analysis and specific recommendations for correction and future improvement.


:wrench: 1. Platform Awareness Issues

:cross_mark: Problem:

  • Several responses assumed you were on iOS, when in fact you were using macOS (e.g., referring to a non-existent “Preview mode” or UI elements like the “i” button).
  • Missed differences in how UI elements (like Prompt.addSelect) behave across platforms.

:light_bulb: Analysis:

  • Drafts’ feature set and UI can differ subtly but significantly between macOS and iOS.
  • Some responses assumed the more full-featured or widely documented platform (iOS) without first confirming the user’s platform.

:white_check_mark: Recommendations:

  • I should always confirm the user’s platform (macOS vs. iOS/iPadOS) before giving UI or feature-specific advice.
  • Integrate platform-specific awareness into my Drafts-related context so I can provide correct UI and behavioral expectations immediately.

:brain: 2. Drafts API Misuse and Misunderstanding

:cross_mark: Problem:

  • Incorrect use of .tags = [...] instead of .addTag(), multiple times.
  • Attempts to use nonexistent APIs like MustacheTemplate, Template, and editor.getDraft().
  • Misunderstood behavior of return; at the top script level.

:light_bulb: Analysis:

  • These issues arise from treating the Drafts scripting API too similarly to standard JavaScript or browser environments.
  • Some of these bugs were initially overlooked even during debugging.

:white_check_mark: Recommendations:

  • Strengthen training with rigorous constraints on Drafts-specific APIs, particularly where they differ from standard JS.
  • Add linting-level logic to flag invalid property usage (e.g., writing to .tags or calling editor.getDraft()).
  • Treat all template-style placeholder processing strictly within Drafts’ supported syntax (e.g., using .replace() or processTemplate() only).

:test_tube: 3. Debugging Oversights

:cross_mark: Problem:

  • When asked to troubleshoot scripts that weren’t working (e.g., tags not applying), I overlooked key lines of code that caused the failure.
  • Missed identifying my own earlier code as the source of bugs.

:light_bulb: Analysis:

  • Focus during debugging was too often on expected trouble spots (e.g., template rendering) and not broad enough to catch low-level functional errors like tag setting.
  • I did not track the provenance of code, failing to flag that the buggy code was originally generated by me.

:white_check_mark: Recommendations:

  • Use a more structured diagnostic flow when debugging scripts (e.g., confirm API use, trace execution, verify side effects).
  • Flag when reviewing my own previously generated code and apply higher scrutiny to potential systemic flaws.

:speech_balloon: 4. User Experience and Guidance

:cross_mark: Problem:

  • Suggested features that are not available on macOS (e.g., true drop-down selection in Prompt) without explaining platform constraints up front.
  • Provided options that sounded promising but ultimately didn’t work, wasting user time (e.g., dropdowns that aren’t dropdowns).

:light_bulb: Analysis:

  • Desire to find a workaround or offer an optimistic path may have led to suggesting half-working or unsupported solutions.
  • Failed to stop and reassess when user results contradicted expected behavior.

:white_check_mark: Recommendations:

  • Be more transparent when Drafts’ platform limitations prevent a true solution.
  • Prefer upfront honesty over workarounds that are unlikely to succeed across all platforms.
  • When uncertain, verify behavior or refer user to official sources like Drafts Forums.

:toolbox: 5. File & Distribution Promises

:cross_mark: Problem:

  • Offered to “provide a .draftsAction file” or “publish to an action group”—capabilities that I, as an AI, don’t have.

:light_bulb: Analysis:

  • These are reasonable suggestions if a human were assisting, but misleading coming from an AI agent that can’t create files or host links.

:white_check_mark: Recommendations:

  • Stop offering direct file delivery or hosted links.
  • Always suggest manual creation and provide exact code to paste into a new Drafts action step-by-step.

:white_check_mark: Final Takeaways & Improvements

Training Improvements:

  • Reinforce Drafts-specific API use (e.g., tag handling, prompt structure).
  • Expand built-in platform-awareness logic.
  • Add logic to track and re-evaluate my own previous code when bugs are reported.

User-Centered Fixes:

  • Confirm platform before providing UI instructions.
  • Err on the side of honesty when features don’t exist.
  • Debug thoroughly and transparently.

Thanks for the detailed notes. Sounds pretty consistent with my experiments.

I’m surprised the cross-platform issues seem so prevalent for you; there are very few differences in the Drafts APIs between iOS and Mac. Yes, some elements appear differently, but they should be functionally equivalent and do not require different scripting to utilize.