
Devember 2021 - Punch Clock Web App



My Devember project is to build a full-stack manual activity-tracking application.

I will build a web application where I will be able to “clock into” and “clock out of” an activity, such as reading a book, working on a side project, or doing exercise.
It will also let me manage a set of time based “goals”, such as “average one hour of reading per day” or “six hours of physical exercise a week”.
There will be a set of visualization views which will allow me to see how I spend my time and if I’m meeting my preset goals.

I wish to gain skills in these areas:

  • application design
  • frontend development
  • slicing a project into deliverable chunks
  • delivering a “product” which I will use daily

Non goals

  • solving the needs of other people
  • integration with other existing time tracking solutions


During the lockdown in late 2020, I began thinking about how I spend my free time.
I realized that even though I really enjoy reading books and I have lots of them on my to-read list, I don’t actually spend a lot of time reading them.
This was making me unhappy.
In the past I tried to build a reading habit by focusing on goals such as: “I want to finish this book by the end of this month”.
At first, when I was strongly motivated, and such deadlines were easy to meet, everything was great and I felt a lot of satisfaction from meeting them.
However, over time, the deadlines became a source of anxiety and were demotivating me.

In early 2021 I decided to formulate my reading goals not in terms of “books finished by a certain date”, but rather as “time spent reading”.
My reasoning was that this kind of goal was closer to what I was actually trying to achieve.
I began tracking time spent reading in a text file on my computer, then in note-taking applications and Google Sheets, and eventually built a small web application which would be easy to use from my phone.

This approach turned out to be very successful for me.
I’ve been meeting and exceeding my reading goals for 10+ months now and feeling better because of it.

In 2022, I would like to try to extend this approach to tracking other activities such as doing regular physical exercise or learning a new language.

I want to rebuild my reading tracker application into a more generic time tracking application.


I am aware that there already exist time tracking applications out there.
I am deliberately not experimenting with them and not mining them for ideas.
I want my system to be my own, I want to come up with my own ideas and test them.
I want to eat my own dogfood and drink my own champagne.

I will build upon the things I’ve learned with my reading tracker application and probably even reuse some components of it (particularly some of the data visualization stuff that I’m already happy with).


Acceptance criteria

  • The user is able to access the application through a web browser on both desktop and mobile.
  • The user can only access their own data.
  • The user is able to clock into and clock out of an activity; the service tracks the time spent in between.
  • The user is able to retroactively adjust the bounds of a clocked interval or to delete it completely.
  • The user is able to track multiple different activities independently.
  • The user is able to define time-spent based goals for each activity.
  • The user is able to review their time spent and how it compares to their goals in a visual interface.
  • The user is able to retroactively adjust the metadata (such as activity details) associated with a clocked interval.
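As a rough illustration of the core clock-in/clock-out requirement, here is a hedged in-memory sketch (all names are illustrative, not the app’s actual data model):

```javascript
// Hedged sketch: independent clock-in/clock-out per activity.
const intervals = [];     // completed intervals
const active = new Map(); // activity name -> clock-in timestamp (ms)

function clockIn(activity, now = Date.now()) {
  if (active.has(activity)) throw new Error("already clocked into " + activity);
  active.set(activity, now);
}

function clockOut(activity, now = Date.now()) {
  if (!active.has(activity)) throw new Error("not clocked into " + activity);
  intervals.push({ activity, start: active.get(activity), end: now });
  active.delete(activity);
}
```

Tracking the active timestamp per activity is what allows multiple activities to be clocked independently.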



  • Linode

This is a no-brainer because of the $100 I get to play with thanks to the promotion with L1techs.
I have some experience running containerized services from managed platforms like GCP, AWS and heroku, but running it directly from a box will be new to me so it will be a good opportunity to learn.


  • clojure web server
  • postgresql database

I’m already familiar with clojure and enjoy working with it, and in this project I want to focus on developing design and frontend skills, so I will not do anything too ambitious here.
Similarly with the database, I will stick with postgresql because I’m already comfortable with it.


  • htmx + hyperscript
  • bulma css framework

I learned about htmx a few months ago and have been looking for an excuse to experiment with it ever since.
My original reading-tracker application was written in clojurescript + reagent + re-frame so I may fall back to that stack if I have trouble with htmx.
I will use a css framework for this project instead of building my own styles from scratch because in my experience, debugging css is very frustrating and time consuming.


  • MVP
    1. Build a draft of the ui and interaction logic backed by mock data and in-memory only storage
    2. Familiarize myself with linode, deploy a basic web server (cookie-based auth) and database
    3. Design and implement a data model to support hard-coded clockable activities and clock-in/clock-out logic (with considerations for expansion to dynamic activities)
    4. Add basic data visualization interface
  • Post MVP
    1. Manage clockable activities through the application itself
    2. Link a clocked interval with multiple activities at once (for example “Reading SICP” could count both as “Reading” and “learning CS theory”)
    3. Dynamic clockable activity generation (for example loading up list of currently-reading books from goodreads)


I regularly push my codebase to gitlab.

I started working on a draft of the interface about a week before making this post (the ui-draft branch).

For stuff related directly to the code, I make notes in the commit messages themselves.

I will also be posting progress updates as comments in this thread.


Progress Oct 11 to Oct 17

  • Drew a few sketches of the UI

  • Setup gitlab repo

  • Built a skeleton web service with clojure + reitit + ring

  • Added some mock data on the backend

  • Added some basic views for activity cards with clock-in/clock-out functionality powered by htmx

  • Added running “Currently clocking” element powered by hyperscript

  • Setup linode account through the level1 promo link

  • Created a basic ubuntu based linode deployment

  • Configured nginx, linode dns and ssl certificates through let’s encrypt
    (the domain currently serves a static file, not the activity tracker application)

  • Announced project on level1 forums

A few selected devnotes

commit 1150c6bea66e75600b58acd17074588fc1c670a4
Date: Thu Oct 14 19:58:25 2021 +0200

add a running clock for active intervals using hyperscript

I struggled quite a bit with the hyperscript here,
I kinda assumed I could just use it mostly like javascript with some
special keywords, but it changes a lot more things.

For example, take this construct in js:

    let now_ts = (new Date()).getTime()

I assumed I could do something like this:

    set now_ts to (new Date()).getTime()

but that's not how you instantiate a class,
instead you have to use `make`, so I thought this would work:

    make Date() called now
    set now_ts to now.getTime()

but that's not how you call a method, you have to use `call` or `get`:

    make Date() called now
    set now_ts to call now.getTime()

except you can't capture the results of a call like that,
you have to use the special `it` variable:

    make Date() called now
    get now.getTime()
    set now_ts to it

In the end it reads pretty nicely,
but assembling it took a lot of "unlearning" of the familiar.
My advice would be to not assume any similarity to js,
just go through the hyperscript docs as you would with any new language.

commit b048107b28803e1518c0fe271af66bb75a4624a5
Date: Fri Oct 15 16:29:29 2021 +0200

add navigation menu

Some css struggle this time.
I'm developing on desktop, but Bulma has some built-in logic for hiding
navbar components when the window is narrower than a certain width.
This confused me quite a bit, because I didn't read the docs before
working with the navbar.

I ended up implementing a modal for navigation on mobile.
The modal is triggered by Bulma's navbar hamburger menu with a small bit
of hyperscript.
I must admit that for this use case hyperscript is actually pretty great.
I'm starting to get a feel for the right time to use it and when not to.

commit d55d57c3e821e6857cbe17009a4887b920ebd9b0
Date: Fri Oct 15 17:24:31 2021 +0200

add activity detail view

More css trouble.
This time with leveling two tags in a bulma panel item.
There is some strange interaction between flexbox and the .tags class.
I wasn't able to vertically center two sets of tags in a flexbox.
I also tried wrapping the elements in a .level, but that also didn't
work exactly how I liked.
I ended up wrapping the .tags element in another div, this somehow
isolated it from the spooky interaction.
It's a bit hacky but it works, and I don't want to spend another minute
debugging these css things so I'll take it.

commit a6a2a4be3f7dbce7351a8373289c4381b5bb6f3c
Date: Fri Oct 15 18:12:19 2021 +0200

differentiate selected activity by background color

Even though I spent a lot of time making the tag-based indicator work,
I wasn't quite happy with it.
Using background color feels a lot more natural.

commit 15534ec28b72d4aec013d5bb2e4ec590641b9c9b
Date: Fri Oct 15 20:59:57 2021 +0200

replace mobile navigation modal with a fixed panel

The modal navigation component was very unintuitive and clumsy.
I decided to replace it with a secondary vertical panel right under the
regular navbar.
Bulma has some useful helper classes for this which let me hide it when
the window is desktop sized.

<2021-10-16 Sat 23:42>

Initial linode setup

I already had a linode account from a previous devember but I never did anything interesting with it.
I was unable to easily add the devember $100 credit because it only gets applied to newly created accounts.
I dropped my old account and created a new one, and was able to apply the promotional credit.

I followed these two tutorials to set up an nginx based web server and route it to the `` domain

It was surprisingly easy.

<2021-10-17 Sun 17:20>

https setup

managed to setup https, site now runs on
It was pretty easy, all I had to do was follow this article

(I’m pretty sure that nobody (even myself) will be able to decipher my handwriting in these sketches, but I want this log to be an as-complete-as-possible documentation of my process, so tough luck)


So much this. Companies don’t do this anymore and I think that is why the consumer landscape is so bad. I don’t even want to talk about the horrors seen in the gov’t contracting world.


Progress Oct 18 to Oct 24

I didn’t get as much done on this project last week because work has been pretty hectic, so I spent more of my free time away from the computer to clear my head.

Goals and progress bars

On the backend, I added some basic goal definitions.
The initial version was a bit too ambitious: I had intended to allow a lot of customization in the goal definitions - both in the unit and value of the goal and of the evaluation window - but then I simplified it in a big way.
The goals themselves are now measured only in terms of hours and the evaluation window is either day or week.

This fits my use-cases for now so there is little reason to make it more complicated, but I’ll try to handle the goal definitions in such a way that it won’t be too painful to expand them later.

I added some progress bars into the activity detail view to indicate how well I’m meeting the preset goals (see the screenshots below).
I originally hadn’t planned to have this kind of view, but the idea occurred to me when I was browsing bulma’s documentation and I think it turned out pretty nice.
I may move them somewhere else later, perhaps into some universal overview of all activities and goals, but for now they make a nice component for the activity detail page.
I’m quite happy that I was able to encode the state of the goal within the color of the progress bar:

  • green means the goal is satisfied
  • blue means the goal is not yet satisfied but it still may be
  • red means the goal has not been satisfied and it is closed (ie. I cannot go back in time to do more reading yesterday if I didn’t hit the goal)

  • Goal tracking - progress bars in activity detail view (still only mock data)
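The color mapping above could be sketched roughly like this (a hedged sketch: the field names are illustrative, the classes are Bulma’s progress-bar color modifiers):

```javascript
// Hedged sketch: map a goal's state to a Bulma progress-bar color class.
// hoursSpent, hoursTarget and windowClosed are illustrative field names.
function goalColor(goal) {
  if (goal.hoursSpent >= goal.hoursTarget) return "is-success"; // green: satisfied
  if (!goal.windowClosed) return "is-info";                     // blue: still attainable
  return "is-danger";                                           // red: window closed, goal missed
}
```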

Possible revision of activity tagging model

I’ve been rethinking the way I have the activities and tags setup now.
Originally, I thought that each interval would have a single, multi-level tag, like Reading - Rifters - Starfish.

When I was considering the data visualization, I also wanted more indirect tagging for things like genre.
This could fit reasonably well if there is only one additional tag, but when there are multiple it’s no longer clear what the hierarchy should look like.
For example does author belong above genre or the other way around?

For this reason I’m thinking of changing the model to a list of key-value pairs, where keys may be duplicated - this is easier to model in sql than exclusivity, and I’m not yet sure if I would even want to enforce exclusivity for tags (eg. multiple authors).

Clocked intervals won’t be linked to “activities” but to “tags” directly through a many-to-many relationship.

For the sake of user experience, there will be a set of user-configurable presets which represent groups of tags, and some information about how to display them (in some hierarchical way).

Goals will be associated with individual tags as well, and as a consequence, one interval may contribute to multiple goals.

In some ways this approach would be less complicated than the hierarchical model, and I believe it would be more extensible for tracking more information about the activities, but I must think some more about the tradeoffs before I plunge into it.
Either way, I’m glad I spent some time thinking about it now, before I began working on the hierarchical model proper.
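To make the proposed model a bit more concrete, here is a hedged in-memory sketch (all names and sample data are illustrative, not the real schema):

```javascript
// Hedged sketch of the key-value tagging model described above.
const tags = [
  { id: 1, key: "activity", value: "Reading" },
  { id: 2, key: "author", value: "Peter Watts" },
  { id: 3, key: "genre", value: "sci-fi" },
];

// many-to-many: an interval links directly to tags, not to an "activity"
const intervalTags = [
  { intervalId: 10, tagId: 1 },
  { intervalId: 10, tagId: 2 },
  { intervalId: 10, tagId: 3 },
];

// goals attach to individual tags, so one interval can feed several goals
const goals = [
  { id: 100, tagId: 1, hoursPerWeek: 6 },
  { id: 101, tagId: 3, hoursPerWeek: 2 },
];

function goalsForInterval(intervalId) {
  const tagIds = intervalTags
    .filter((it) => it.intervalId === intervalId)
    .map((it) => it.tagId);
  return goals.filter((g) => tagIds.includes(g.tagId)).map((g) => g.id);
}
```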

Progress 25 Oct to 31 Oct

I am happy to report that I’ve made a lot of progress this week.
At this point I have nearly achieved feature parity with my previous reading tracker application, so starting next week, I will begin testing the new activity tracker in daily use.

The only thing left for that is to install postgres and spin up the application on my linode box, but I don’t anticipate trouble on that front.

I did a complete rebuild of the internals and some of the ui elements based on the change in the tagging system I wrote about last time.

At this point I feel pretty comfortable working with htmx, and I am very confident that the new data model will fit my use cases well, so instead of continuing with the mock data approach I setup a local db and began implementing the model in postgres.

As I intend to host the application on a public domain, and I don’t want randos messing up my data, I set up a cookie-based auth system.
There is not really any good reason for the app to have its own database of users, so I store the necessary secrets in the deployment environment.

Because it was a pretty major rebuild and there were lots of small back-and-forth changes, I have most of them squashed in one big commit (I know it’s not best practice in the long term, but it doesn’t really matter in a hobby project, especially pre-v1.0).

Dev notes

Integrating PostgreSQL


From my dayjob I’m used to managing postgresql schema from an external specification (such as sqlalchemy ORM definitions & the alembic tool).

I don’t have any experience with similar tooling in the clojure/jvm ecosystem, and I didn’t want to spend a lot of time manually crafting schema-altering commands, so I chose to try out pgAdmin.

My experience with pgAdmin has so far been entirely positive.
It makes it easy to set up the initial schema as well as alter it later, it has a great GUI for viewing the data as if it were a spreadsheet, and it also makes it easy to do small updates to the data itself with just a few clicks, without destroying the whole table with a bad update command.

There is also a pretty nice GUI for query profiling and explain analyze.
I don’t expect I’ll have a good opportunity to use it in this project because I will never have a large amount of data, but it is certainly something I will be keeping in my toolbelt from now on.


I originally thought my SQL queries would be so simple that I could get away with just having them embedded as strings directly in my clojure app code.
However, I very quickly reached the point where it became really clumsy and annoying.

I chose to go with the HugSQL library to make things a bit more organized. It lets me store my queries in separate files (so I get editor support for editing the SQL code) and treats them as templates which can be pulled into the clojure application.
The execution itself is handled by the next.jdbc driver.
I chose the next.jdbc driver because it has nicer options for result processing (it’s very easy to declare whether the resultset should be a vector of vectors, namespace-qualified maps or something else).

At this point, I’m not super happy with the sql setup: there is a lot of repetition in the way the queries are defined now, which means a small change in the data model would require small changes in lots of queries.
I haven’t yet decided what I’m going to do about it, but the snippets feature of HugSQL seems like a good place to start.

User Interface

Shared active tags in presets view

In this new version of the application, I have multiple “clockable” objects called “presets”.
The presets may share tags among them, and I wanted to make this relationship clear in the user interface.

The structure looks like this:

When I press the “Clock-in” button on the “The Name of the Rose”, I issue a request to the server which returns a new version of the preset card and swaps it into the page.
In this new card, all the tags are marked as active (more vibrant color).

However, this preset contains the “Reading” tag which is also present in other presets, and I want to update them too.

htmx docs describe several ways of achieving this, this is a quick summary of some of the options:

  • reload the surrounding component (I don’t like this because it could be unnecessarily running some expensive query on the other presets)
  • mark the tag elements in a special way and use the clock-in response to notify the browser that it should refetch the affected tags (I don’t like this because it runs request for the same data multiple times when the tag occurs in multiple presets)
  • perform an out-of-band swap - in the response to the “clock-in” request, the server will also return the new definitions for the affected tags outside of the primary preset card, and htmx will swap them into the necessary place

I experimented with each of the methods and chose to go with the out-of-band swap. There is a minor issue related to the fact that the tags are not necessarily unique, which I wrote about in this feature request in the htmx project itself.
It is not a blocker for me though, because I can generate tag identifiers by combining preset_id and tag_id.
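A hedged sketch of what such a response might look like (element ids, classes and content are illustrative; the hx-swap-oob attribute marks the extra fragments that htmx swaps into the elements with matching ids elsewhere on the page):

```html
<!-- primary content: the updated preset card, swapped in normally -->
<div class="card" id="preset-2">
  ... updated "The Name of the Rose" card with active tags ...
</div>

<!-- out-of-band fragments: the same "Reading" tag in other presets,
     identified by combining preset_id and tag_id -->
<span id="tag-3-1" class="tag is-primary" hx-swap-oob="true">Reading</span>
<span id="tag-4-1" class="tag is-primary" hx-swap-oob="true">Reading</span>
```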

In general, working with htmx has been amazing, it gives me the means to enable some pretty interesting user-service interactions, while keeping all of the logic serverside.

Interval editing page - redirect after delete

The interval editing page has a panel on the left side which can be filtered by preset_id.
This parameter is passed through a query parameter.

When I delete an interval, the current url (eg. /intervals/5) is no longer valid, so I want to be redirected back to the interval selection.

In this redirect I don’t want to lose the preset_id query parameter, so the redirect has to pass it along:

  • /intervals/5 → delete interval 5 → /intervals
  • /intervals/5?preset_id=1 → delete interval 5 → /intervals?preset_id=1

The thing is, the redirect happens because the response to the DELETE /intervals/5 request contains the hx-redirect <url> header, which tells the browser where to go afterwards.
At first, I figured I would have to pass the preset_id along with the delete request itself, so that the server could build the hx-redirect url with it, but I felt a bit dirty about having a parameter on the delete endpoint which didn’t specifically relate to the object in question.

I decided to pull the query params from the referer header instead and put them in the redirect url.
Since the referer for DELETE /intervals/5 is /intervals/5?preset_id=1, I get all of the required functionality without polluting the DELETE /intervals/{interval-id} resource path.
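The referer-based approach can be sketched like this (a hedged sketch in plain JS rather than the app’s actual Clojure/ring code; the function name is illustrative):

```javascript
// Hedged sketch: build the post-delete redirect target by carrying over
// the query string from the referer header.
function deleteRedirectTarget(refererHeader) {
  // referer for DELETE /intervals/5 looks like ".../intervals/5?preset_id=1";
  // the base url only matters if the referer happens to be relative
  const query = new URL(refererHeader, "http://localhost").search; // "?preset_id=1" or ""
  return "/intervals" + query;
}
```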



Progress 1 Nov to 7 Nov

I didn’t get any major changes done this week.
The weather was pretty nice so I spent some time outside.

The main thing I did was set up postgresql on linode, deploy a “production” version of the app and start using it daily.
As I started using it on mobile I found some obvious UI issues and I fixed them.


Postgres with pgAdmin

Setting up postgres alone was pretty easy, linode has some pretty decent tutorials on setting it up and exposing it through an SSH tunnel.
I then connected to the db from my local machine and setup the db schema and some initial tags/presets/goals with pgAdmin.

The schema definition wasn’t as simple as I had hoped: I wanted to generate a schema creation script from my dev db and then run it on the prod db.
PgAdmin has a tool for this - the schema diff tool, but the output script isn’t perfect and needs to be manually adjusted.
For example, for an autoincrementing primary key it creates a sequence definition which refers to the table, and a table definition which refers to the sequence.
Obviously, such a script cannot be executed as-is, because the sequence depends on the table and vice versa, so I had to edit it to first create the sequence without referring to the table, then the table, and then alter the sequence to be owned by the table.
I’m a bit disappointed by this, but in the end it wasn’t such a big deal and I still think pgAdmin is a great tool.

Mobile UX improvements

I know that firefox devtools have an option to render the window as it would appear on a given mobile device, but I never used it during the initial development, because I trusted that bulma css would do all of the responsiveness for me.
When I finally tried it, I found out there was a major scaling issue and several smaller issues with unnecessary padding which forced scrollbars to appear on mobile.

The biggest trick I learnt this week is adding this meta tag to the html document’s head section:

<meta name="viewport" content="width=device-width, initial-scale=1">

This magically fixes the scaling issues and makes the mobile interface actually usable.

I removed the unnecessary gaps between components by wrapping the main part of each view in a bulma .container class and switching all .columns to .columns.is-gapless.
There is nothing too sophisticated about these issues, but they are interesting to me because I’m learning how rendering differs between desktop and mobile clients.

Timezone issues

The linode instance I have is setup to run in UTC (and I don’t intend to change that), but I am in the Europe/Prague time zone.
This causes some annoying datetime-related inconsistencies in the UI - for example in the goals view I have some aggregations based on “today”, “yesterday”, “this week”, “last week”.
Because I’m using htmx, all of the calculations of the displayed data happens serverside.
For this reason the component wasn’t always displaying what I expected from it - eg. at half past midnight local time, it still hadn’t moved the previous day’s data into “yesterday”, because according to UTC the date hadn’t changed yet.

I considered several options how to deal with it:

  1. Load the timezone using javascript/hyperscript and send it in some request parameter
    This would be fine for xhr but not for initial html requests

  2. Hardcode it on the backend
    This is the most straight-forward and simplest solution.
    I am the only user right now and I don’t plan on moving between
    timezones any time soon so it brings no inconvenience.

  3. Let the user configure it and store it in the jwt payload which is shipped with every request
    This option would make sense to me if I had multiple users.
    I would have to implement some way of preserving the preferred timezone
    through jwt refreshes, or to keep it in a separate cookie.
    I may move to this solution later as an exercise.
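Option 2 can be sketched with the standard Intl API (a hedged sketch: the function and parameter names are illustrative, and the real app is Clojure, which would use java.time instead):

```javascript
// Hedged sketch of option 2: resolve "today" server-side in a hardcoded zone.
function localDateString(utcDate, timeZone = "Europe/Prague") {
  // the en-CA locale happens to format dates as YYYY-MM-DD
  return new Intl.DateTimeFormat("en-CA", { timeZone }).format(utcDate);
}

// Option 1 would instead detect the zone on the client, e.g. with
// Intl.DateTimeFormat().resolvedOptions().timeZone, and send it along.
```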

This kind of issue isn’t a big deal, but it is annoying, so I fixed it.
It really isn’t a “fun” issue, it feels more like a chore, and it did make me wish that there was some standard for communicating timezone info between client<->server.

Simplifying the progress bar colors in goals view

This is just a minor change, but it feels important to me because it ties into the intent behind the tool and the desired user experience.

I removed the red color from unfulfilled goals which could no longer be fulfilled (eg. yesterday I didn’t reach the 1 hour goal, and I can no longer naturally add to it so it would be red).
The red looked really great but it was working against the purpose of the tool.
The big idea is to have at worst a neutral way of tracking the goals and at best some positive reinforcement of the habit.
Having the red color there for goals that I missed and can no longer attain is negative reinforcement, and I think it could be demotivating in the long run, so I have removed it.

  • Before

  • After (also some layout changes and additional weekly average metrics added but that’s not relevant to this note)

It was an important decision for me because it came down to removing a pretty feature in order to maintain the core principle of the tool (avoiding negative feedback signal).


Bravo my guy! Sticking to your principles and the core design is hard to do but very important.


Progress 8 Nov to 14 Nov

Finally it was time to start working on the data visualization components.
I have some experience in this area and one of my favorite libraries is vega-lite.
In general, I really like vega-lite’s high-level interactive primitives and sensible defaults for simple visualizations, but I do feel that I have reached the limits of the library in this project.

I built a barchart to view daily progress on each goal.
This was simple enough, but I got into trouble when I started adding interactive components.

Vega-lite and Vega (its parent project) share a similar interface, where the entire visualization, including its interactive components, is defined in a single json document.
The library allows you to link some of the parameters used by the visualization to already existing html components which implement the EventTarget interface.

I added a /data/* path to the server which provides data in json format (as opposed to the html fragments provided by other endpoints) to support vega-lite’s conventions.

I was able to build the basic barchart visualization very easily, but I ran into some trouble with my tag filter feature.

In my model, each clocked interval is related to a single preset which may be related to multiple tags.
I wanted an interface where I would select a subset of tags and see a timeline visualization of intervals only related to these tags.
Unfortunately I did run into some trouble when linking input components to vega-lite.
My tag-selector component outputs a list of tag_ids and I would like my visualization component to filter out such intervals which don’t match any of the selected tags.
As such, I need an intersection/overlap operation between two arrays, and unfortunately vega doesn’t provide one.

I came up with two hacky ways around it.

1. Use regular expressions to mimic an intersection operator

The selector element would return a regex in the form /('tag_id1'|'tag_id2'|'tag_id3'|'tag_id4')/.

Then I would convert the interval’s tag ids to a string with a simple “tag_ids.join('|')”, use vega’s regex utility replace to inject a special character, and then search for it:

indexof(replace(datum.tags_str, selected_tag_ids_str, '$'), '$') != -1
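The same trick can be re-created in plain JS to see it work (a hedged sketch; in the app this logic lives inside a vega expression, and the real tag ids are quoted like 'tag_id1' to avoid accidental substring matches):

```javascript
// Hedged sketch of the regex-based "intersection" hack.
function intersects(tagIds, selectedRegex) {
  const tagsStr = tagIds.join("|");                  // eg. "1|2|3"
  // "$$" in a JS replacement string produces a literal "$" marker
  const marked = tagsStr.replace(selectedRegex, "$$");
  return marked.indexOf("$") !== -1;                 // marker found => overlap
}
```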

This does work, and I kind of love it, but I must also recognize that it’s not a good solution because it relies on about three levels of hacks, so I went with the second option.

2. Flatten, filter, groupby

Vega-lite has some pretty powerful data-processing utilities, the ones that are relevant here are flatten, filter and aggregate/groupby.

  • flatten takes a data point’s field of type array and replaces the datapoint with a set of datapoints, where each has a distinct value from the original array (ie. {a: [1, 2, 3], b: "hello"} -> [{a: 1, b: "hello"}, {a: 2, b: "hello"}, {a: 3, b: "hello"}])
  • filter takes a single data point and runs a predicate on it, if the predicate returns true, the datapoint stays, otherwise it’s excluded from further processing
  • aggregate/groupby is in some sense the dual operation to flatten; you give it a set of data fields, it groups the datapoints by the groupby fields and applies the aggregate op function to each group (on a field-by-field basis, eg. you can have some fields averaged and others summed, etc.)
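As a plain-JS illustration of the flatten semantics described above (a hedged sketch, not vega code):

```javascript
// Hedged sketch: replicate vega-lite's flatten transform for one datum.
function flatten(datum, field) {
  // one output datapoint per element of the array-valued field
  return datum[field].map((v) => ({ ...datum, [field]: v }));
}
```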

Closing thoughts for this week

In general, I like the idea of having the service provide its own interface through actual html pages, while also supporting a pure-data interface (json/edn/transit) on a subset of paths prefixed by /data/*.
This satisfies both self-containment for MVP and facilities for expansion.

I also like vega’s notion of declarative specification of a data visualization, but in practical scenarios, I think it’s likely that the specification would have to be adjusted to rely on the business-specific context where it’s being interpreted.

I think I also found the limits of comfort with hyperscript.
The component chain, where the goal selector influences the tag selector, which in turn influences the timeline visualization, was just one step too janky in my opinion.
I am still impressed with hyperscript and its ability to express each element’s event handling in a near-natural-language syntax, but I do think that react, with a shared atomic state between the three components would have made things quite a bit easier in this case.

I feel a certain sense of dissatisfaction with the amount of presentable artifacts I have this week.
I spent a lot of time debugging and experimenting with hyperscript and vega-lite, most of which turned out to be dead-ends.
I also don’t feel like I’ve fully explained all the avenues I’ve explored.
If you have any questions or suggestions about the stuff I tried / didn’t try, I will happily answer.

Also this week I started experimenting with video instead of plain screenshots, please let me know if it improves these weekly progress updates, if I should focus on my writing style, or any other feedback you might have.


Progress 15 Nov to 21 Nov

This week I focused on building a page for editing the clockable presets.
As a part of that I also had to build an interface for editing tags.

I began running into issues with the way I have setup my sql queries and the api.
All of it is basically an ad-hoc, fewest-moving-parts-possible approach, which was great for setting up the initial prototype quickly, but now, to make any major progress, it feels like I need to bring more structure to it.
I already started moving in that direction by integrating the migratus tool for managing the postgres schema.

I’ve also learned a lot about htmx, and more vanilla (non-react) frontend development in general, and at the same time I’ve reached the point where the application is usable (although not polished) for most use cases I had in mind when I started.

So now I will focus on some refactoring to make the codebase cleaner and more resilient before I start adding more features.


Goodreads integration

I have a basic form for adding and editing clockable presets, and something similar for managing tags.
I also have a basic integration with the goodreads api; the books on my “currently-reading” shelf are shown in my application and can be converted into clockable presets with a single button (including tag generation for basic book-related attributes such as author, title, and series).
To prevent creating duplicate presets, the preset created in this way has a reference attribute containing a link to the external object.
Next time the external object is loaded it is not possible to create a preset from it because one already exists with the matching reference value.

Goodreads no longer issues new api keys, but I was fortunate enough that I still had access to a key that I generated years ago when I was doing some data collection.

I’m quite used to dealing with json-based apis, but goodreads uses xml.
At first I struggled a bit with it, but fortunately clojure has a zipper library which makes navigating xml documents less of a chore.

The zipper library is pretty interesting in general; it provides utilities for navigating and altering deeply hierarchical data structures, not just xml documents.

My service makes the request to the goodreads api when the user clicks the “turn this book into a preset” button.
The request takes a few seconds, so rather than delaying the whole page load, I used htmx's hx-trigger: load attribute.

  • The page loads immediately after the user navigates to it, but a loading indicator is shown instead of the data
  • htmx triggers the actual request, which runs in the background until the response comes back
  • The response is swapped into the page in place of the loader component

I like this a lot.
Even though the user waits for the data the same length of time, it feels much snappier and more responsive.
It is also really easy to set up.
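As a rough sketch of this deferred-load pattern in hiccup (the endpoint path and component name here are hypothetical, not the app's actual ones):

```clojure
;; Hypothetical sketch of the deferred-load pattern.
;; The placeholder is rendered with the initial page; htmx fires the GET
;; on load and swaps the server's html fragment in place of this div.
(defn currently-reading-placeholder []
  [:div {:hx-get "/api/goodreads/currently-reading" ; assumed endpoint
         :hx-trigger "load"
         :hx-swap "outerHTML"}
   "Loading…"])
```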

Swapping-in non-2xx responses

By default, when the server responds with something other than 2xx status code, htmx won’t swap the content into the page.
This is sometimes undesirable; for example, when a submitted form fails server-side validation (and the server returns 400), the user never learns of the error.

There is a way to work around it though; htmx has an extension point in the htmx:beforeSwap event where we can hook in and override the event.detail.shouldSwap attribute.
That way, we can continue to have response status codes which match http’s semantics while also being able to deliver their content into the page.

There is also an extension being developed which would make this even easier.



Progress 22 Nov to 28 Nov

No new features this week; I’ve been working on cleaning up the codebase, as I mentioned last time.
I’m not done with it yet, and I haven’t fully settled on everything, so I will give more details next week.

Here are some notes about the stuff I’ve experimented with so far:

HugSQL vs HoneySQL

HugSQL is the library I used previously for defining my SQL queries.
You use it by defining the queries in separate .sql files along with some library-specific metadata, and then use them by calling autogenerated clojure functions.

It works pretty well, but I’ve reached a level of complexity where I would like to share some reusable bits between multiple queries.
HugSQL has some facilities for this; it has a snippet system and a “macro”-like system where clojure code in special comment blocks is used to generate parts of the query.
Both of these approaches seem rather clumsy to me. They play against the core benefit of HugSQL - keeping the sql code almost completely decoupled from the rest of the application so that sql-specific tooling can be used on it - while also not leaning fully into clojure’s data manipulation strengths.

So I switched to HoneySQL.
In HoneySQL you define your queries directly in clojure code, in terms of native clojure data structures.
For example:

{:select [:a :b :c]
 :from   [:foo]
 :where  [:= :f.a "baz"]}

Structurally, it’s pretty similar to SQL itself, but it takes some time to get used to the different ways of expressing it.
The big advantage is that it allows for limitless reuse of query components, because they’re just regular pieces of clojure data, and clojure is pretty great at composing this kind of thing.
HoneySQL also has lots of helper utilities which make it even easier.
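To illustrate the kind of reuse I mean, here's a small sketch (assuming HoneySQL 2.x; the table and column names are made up for the example):

```clojure
;; Composing reusable query fragments with HoneySQL 2.x (illustrative names).
(require '[honey.sql :as sql])

(def base-intervals
  {:select [:i.id :i.active_during]
   :from   [[:clocked_interval :i]]})

(defn where-preset
  "Adds a preset filter to any query map, preserving an existing :where."
  [query preset-id]
  (update query :where
          (fn [existing]
            (if existing
              [:and existing [:= :i.preset_id preset-id]]
              [:= :i.preset_id preset-id]))))

;; sql/format produces a vector of the SQL string followed by its parameters.
(sql/format (where-preset base-intervals 10))
```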


reitit

reitit is the library at the core of my web application.
It seems to me that the reitit project is trying to fulfill two roles:

  1. As a general purpose “router” library, which parses urls and their params in whatever context you need
  2. As a microframework for ring web applications

On its own, the router is pretty great and it’s easy enough to work with, but the great people at metosin built a whole ecosystem of tools and libraries for building web applications.

This is all awesome, but I feel like their docs suffer because of it.
They are clearly written by somebody who understands very well how to use each component and how to glue them all together, but the docs sometimes omit that glue, along with the reasoning for why the components were designed the way they were.

In my experience this leads to a lot of trial and error when trying to put together an application whose structure is different from the ones shown in the project’s examples.

To be fair though, many of the tools are aimed at ease of debugging, and they’re well worth the effort needed to get them set up.

Reverse routing in hiccup templates with reitit and clojure metadata

My components first assemble a representation of the html in the hiccup format, which is then converted to html and served by the app.

Often, one component refers to another resource’s url, and I previously handled that by literally assembling the url in the component itself.
For example:

(defn interval-detail
  ([interval presets] (interval-detail interval presets {}))
  ([{:clocked_interval/keys [id active_during] current-preset-id :preset/id} presets opts]
   ;; ... component body elided; the delete button receives these attributes:
   {:hx-delete (format "/api/intervals/%d" id)
    :hx-confirm "Delete interval?"}))

Here, the path to delete the interval resource is fully formed within the component, so in essence, the component has to be aware of the route scheme, and must be able to assemble the urls correctly.
In this case it is quite simple, but it can get complicated and clumsy very quickly when more complex paths and query parameters become involved.

To simplify things a bit I am now making use of reitit's reverse routing feature.
In the routing table, the url is identified with a keyword, for example: ["/api/intervals/{interval_id}" {:name :api-interval}]
And then the component only has to know the keyword, and the router can be used to construct the path:

         {:hx-delete ^:route [:api-interval {:interval-id id}] 
          :hx-confirm "Delete interval?"}

I use clojure’s metadata to annotate the route’s keyword and the route’s params here to indicate it is not part of the regular hiccup structure and should be reverse routed.

The substitution is taking place using this utility function:

(defn substitute-routes [router form]
  (letfn [(swap-obj [obj]
            (if (:route (meta obj))
              (let [[route-name path-params query-params] obj]
                (-> (r/match-by-name router route-name path-params)
                    (r/match->path query-params)))
              obj))]
    (prewalk swap-obj form)))

prewalk is a built-in clojure utility (from clojure.walk) which traverses the entire hiccup tree and applies the given function to every object; here I check whether the object has the {:route true} metadata and interpret it if so, otherwise I leave it unchanged.
reitit's match->path constructs the target url including query parameters.

Next week, I will continue the big revision of the codebase.


Progress 29 Nov to 5 Dec

I nearly completed the refactoring this week, but work has been pretty hectic and I felt I needed some time away from the computer so I still have a little bit to go.

Next week, I’ll post some more thoughts about the new shape of the application and my plans with it going forward.


You are seriously putting in some good work. This will be good for a resume/portfolio.


Progress Dec 6 to Dec 12

The grand refactoring is basically done; the only thing left is to clean up a few things and check that everything is working as it should.

I originally thought it would take less than a week. Looking back at it now, I think one week is a reasonable estimate for the amount of work, but in order to avoid burning out, it’s important not to force oneself into spending every minute of free time on the same side project or activity.
Also, I can’t help but dedicate some time to advent of code every day.
I’ve been enjoying it so much that I’ve gone back to take a look at puzzles from previous years.

I had to cancel some plans because of covid restrictions, so I have some free time coming up and I have some more ideas I’d like to experiment with in this project.
I probably won’t do all of them, this is just a bit of initial brainstorming:

  • Use htmx's history api integration in place of full page reloads - should make the app a bit more pleasant to use
  • Customize the bulma css distribution - I’m currently using the default colorscheme and I have to say I’m not a huge fan of it. I may try to play around with it a bit.
  • Drop the dependency on my goodreads api key, and use some open provider of library data; I still have to do more research on that. Perhaps pair it up with an active search feature backed by htmx.
  • Open the app to other users - I would have to build a user management system, make sure each user’s data is naturally separated, keep track of login information etc… This would be a pretty major extension of the project and one I didn’t originally plan on, but it might be a fun challenge.
    I would probably use some external SSO provider instead of managing all of the user identification stuff myself.
  • Add some more “business” logic, for example make sure only one activity can be clocked in at a time, or that no preset can be edited while it’s being clocked etc… More thinking required on this either way.
  • More data visualization options - control the aggregation interval, more charts (perhaps github style calendar heatmap)

Some technical notes from last week:

malli for data validation and coercion

In the previous version of my backend application, I used built-in ring middleware to obtain the parameters sent with each HTTP request, and I had to manually parse the data and make sure all required fields were supplied.
This also meant I had to have custom logic to check everything and to throw an appropriate error when something wasn’t right.
Building a comprehensive data validation layer this way would be way too much work, so I didn’t really bother with it outside of the most obvious cases.

Now, I am using the malli library to do the data validation and coercion.
It integrates nicely with the reitit library (it’s built by the same company).

It serves a similar purpose to something like an openapi spec - it is a way to declare which data is accepted by each endpoint.
Unlike openapi, the malli definitions aren’t published by the service itself by default, but the reitit-malli library has some utilities to do so.

The schema of each endpoint’s parameters is declared in its definitions in the router, for example:

       ["/presets/{preset-id}" {:middleware [preload-preset-with-tags]
                                :parameters {:path [:map [:preset-id [:and [:> 0] :int]]]}}
        ["/card" :preset-card]
        ["/card/actions" :preset-card-actions]
        ["/form" {:name :preset-form
                  :middleware [parse-form-tag-ids]
                  :parameters {:form [:map
                                      [:label :string]
                                      [:type :string]]}}]]

(each endpoint under /presets/{preset-id} takes a path parameter preset-id which is a positive integer, and the /presets/{preset-id}/form endpoint takes form arguments label and type, both strings)

Not only is this really great for validating the incoming data, but it also parses it to the specified types.
For example, when called as /presets/10, [:preset-id :int] would parse the value as {:preset-id 10}, whereas [:preset-id :string] would parse it as {:preset-id "10"}.
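To make the validation/coercion distinction concrete, here is a rough illustration using malli directly rather than through reitit's coercion middleware:

```clojure
(require '[malli.core :as m]
         '[malli.transform :as mt])

(def path-schema [:map [:preset-id :int]])

(m/validate path-schema {:preset-id 10})   ;; => true
(m/validate path-schema {:preset-id "10"}) ;; => false

;; Path parameters arrive as strings; the string transformer coerces them
;; into the types declared by the schema.
(m/decode path-schema {:preset-id "10"} (mt/string-transformer))
;; => {:preset-id 10}
```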

This is an amazing convenience, but it isn’t without its limitations.
For example in this particular endpoint, the form also contains a list of tags associated with each preset.

The list of tags is created from a list of checkboxes in the page itself.
The <input type="checkbox"> element forces a certain representation of the checkbox’s state: when it’s active, the form will contain the field $name: "on" ($name is a variable chosen by the developer) and when it’s inactive it will not supply anything into the form.

So the form body may look like this:

type: "some preset type"
label: "a very cool preset"
tag-10: "on"
tag-13: "on"

As far as I know, there is no way to force the html form to aggregate the list of tags into a single field, and there is no way to declare a “wildcard” field in the validation map in malli, so I had to take care of this using a small custom middleware:

(defn parse-form-tag-ids [handler]
  (fn [req]
    (let [params (:form-params req)
          tag-ids (->> (for [param (keys params)
                             :let [[_ tag-id-str] (re-find #"^tag-(\d+)$" param)]
                             :when tag-id-str]
                         (Integer/parseInt tag-id-str))
                       (into #{}))]
      (handler (assoc-in req [:parameters :form :tag-ids] tag-ids)))))

It pulls the raw data from the :form-params field generated by ring, parses everything that looks like a tag- field, and adds the result to the [:parameters :form] map, which is the same place malli’s coercion puts its output.

Although this is a slight wart, it must be noted that the malli devs are aware of this use case and have been discussing ways to address it in a future version of the library.


Progress Dec 13 to Dec 19

Nothing shiny to show off this week unfortunately.
I imported all of the data from my previous tool to the database I setup on linode for this project.
Unfortunately, the way I implemented the tag filtering in the vega-lite chart is not very performant, and it completely tanks with the 400+ clocked intervals across 50+ presets. The full year of data is also not super legible.

I thought I would get it finished by the end of the week, but I’ve been stuck in a loop where I keep coming up with different ways to rebuild it.

Wow, where were you hosting it before then? I am doing the CPU share 2 core, 4GiB RAM, 80GiB HDD plan on Linode. That performs better than my FX-6300 PC at home.

The performance trouble was in the client-side javascript code under the timeline visualization chart.

All of the data is loaded into the client, but the chart has filtering criteria defined in it.
The expression language doesn’t have anything great for checking collection overlap (the collection of tags on an interval record versus the collection of selected tags for filtering), so I hacked it together with some array operations.

As a result it was doing (number of defined tags) * (number of intervals) * (mean number of tags per interval) comparisons on every redraw event.

For a small amount of test data it was fine, but it exploded quickly when I added more.


Progress Dec 20 to Dec 26

Once again I found less time for this project this week than I had hoped (Advent of code days 23 and 24 were quite a challenge).

I implemented two new features.

Timeline range controls

Added two date selectors to the overview page.
Their value is read into the chart’s code through vega-lite’s parameters interface which I’ve written about before.

They’re just plain basic native date inputs, so they work well both in the browser and on mobile.

The chart handles them by subscribing to their input change events, so the interaction is quick and snappy.

Clockable preset generation from OpenLibrary

My implementation follows pretty closely htmx’s Active search example.

It works like this:

  • the htmx code listens for events on a text input box
  • after the user is done typing for 0.5 seconds, it dispatches a request to the server and swaps the response into a specified dom element
  • while the request is in flight, it automatically displays a “Loading …” indicator in another specified dom element

I route the request to my own server instead of directly to OpenLibrary because I need to convert the json response to html fragments representing my components.

There is another way to do this with the client-side-templates extension.

With this extension properly configured, htmx pulls the response data through a template document before swapping it into the dom.

I may revisit this later, but for now I prefer to leave all of the templating logic unified on the server.


In the demo you can see how I create a new preset after searching OpenLibrary for the book title, then I reassign several existing intervals to the new preset, and finally I show that the timeline chart uses different colors for intervals based on their associated presets.

You can also see a very annoying flicker in the interval editing view when I select another interval.
Currently I treat the change of selected record as a page change, so the browser reloads everything, even though the interface is designed to be very similar in both pages.
It’s really unpleasant so I’m going to take a look at that next, I’m going to use htmx to swap out the relevant elements without performing a full page reload.

It’s really unpleasant so I’m going to take a look at that next, I’m going to use htmx to swap out the relevant elements without performing a full page reload.

So this turned out to be a pretty simple fix.

Here’s the diff and explanation:

-       {:href ^:route [:page-interval-detail
-                       {:interval-id interval-id}
-                       (when selected-preset-id
-                         {:preset_id selected-preset-id})]}
+       {:hx-get ^:route [:page-interval-detail
+                         {:interval-id interval-id}
+                         (when selected-preset-id
+                           {:preset_id selected-preset-id})]
+        :hx-push-url "true"
+        :hx-select "#page-content"
+        :hx-target "#page-content"
+        :hx-swap "outerHTML"}
  • (the response from ^:route [:page-interval-detail ...] is still the same as before)
  • I use hx-get instead of href, so the request will be issued as an XMLHttpRequest and handled by htmx instead of the browser directly
  • hx-select chooses the part of the response which will be swapped into the current page
  • hx-target chooses the part of the current page into which new content will be swapped
  • hx-swap "outerHTML" means the entire target element will be replaced (as opposed to “innerHTML” which would put the content inside of the target element instead of replacing it)
  • hx-push-url "true" places the url of the request into the history stack, so the native “Go back”/“Go forward” browser functionality works as expected

Basically, instead of throwing away the current page content and redirecting to the new page with href, I swap only the #page-content element from the new page into the current one, so there is no full page reload; and I push the new url into the history, so the browser can navigate back and forward just as if it were a regular redirect.

I like this a lot.
This was a really easy change, it required basically no changes to the page’s structure, and I got a much better user experience out of it.

The browser history api is notoriously clumsy, so the fact that I can make it do exactly what I want with a single htmx directive feels like magic to me.

I feel these features of htmx, along with the stuff I showcased in the active search feature this week, strongly demonstrate how useful htmx can be.

In the recording, you will notice there is still some delay between the click and the content change, that’s because the request still has to go to the server and back, but the really annoying page reload flash is completely gone and page history works just the same as before.


Progress Dec 27 to Jan 2

Technically, this was the last week of Devember, but I’m going to continue adding features and fixing bugs.

I will probably continue to post some development log updates in this thread, but perhaps not every week.

Having the habit of developing, composing my notes, and documenting my progress has been very useful for keeping the project on track.

From my day job, I’m used to two-week sprints, but for a personal project such as this, a single-week cycle was ideal.

I’ll prepare a detailed demo and an overview of all the work done and post it in next week’s update.


Since the last post, I’ve read up more about htmx’s treatment of history api and ajax-driven navigation.
Turns out there is a hx-boost attribute which automatically converts href redirects to ajax-driven swaps.
Basically it’s the same change I did manually, but it can be applied automatically across the whole website.
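In hiccup terms, the whole change can be a single attribute on the layout (a sketch; the page-layout function is hypothetical):

```clojure
;; Sketch: enabling hx-boost once on the page body.
;; Every same-origin <a href> inside it becomes an ajax-driven swap
;; with history handling, with no per-link attributes needed.
(defn page-layout [& content]
  [:body {:hx-boost "true"}
   content])
```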

The user experience is much better, the whole thing feels much smoother as there are no full page reloads anymore.

Different colorscheme for development deployment

While working on a UI bug, I got into a situation where the app seemingly wasn’t reacting to any of the changes I was making.
I restarted my dev server, tried to delete the component entirely etc…, but it just kept behaving the same way as before.
Then I realized I was looking at the “production” version running on linode, not the one running on my local computer where the dev codebase is.

This was frustrating and a bit embarrassing, even more so because it wasn’t the first time it happened.

So I followed bulma’s guide on customizing the colors and set it up so that the development version has a red nav bar and the prod version a blue one, so I will not mistake them again.

Timeline overview chart - aggregation window selector

I made substantial changes in the timeline chart, the most important one being the inclusion of an “aggregation window” parameter.

This is a major convenience feature for viewing a large section of the timeline with many clocked intervals.

When the aggregation window is switched, the goal value is recalculated as well.

Unfortunately, I wasn’t able to implement it entirely using vega-lite’s parameters specification.
This is because I use the timeUnit encoding parameter to control the aggregation on the x-axis.
This parameter cannot be parameterized because it controls the compiled vega spec, not a binding to a vega signal.

As a workaround, I manually transform the value in the specification document before it’s passed to the component generating function, and I also added an event handler to the aggregation window selector, which reloads the component every time the value changes.

It’s not great, but definitely good enough.

Timeline overview chart - color by tag category and highlight by tag value

In the previous version, the interval records in the chart were colored based on the name of the preset which recorded them.
I wanted to also add a means of coloring by arbitrary tags - such as the name of the book’s author, the name of the book series, etc.

So I parameterized the coloration field and added a selector for it based on tag category.

It took some wrangling of the vega-lite specification, but it works so it’s all good.

Unfortunately, the chart can be quite busy when there are lots of different values for the selected category, so I added handlers for vega-lite’s built-in interactivity features which let me highlight the intervals of a particular tag value.


Jan 4th 2022 - Retrospective

In short: It’s been fun and I learned a lot.

The goal of this project was to build a mobile-friendly web app for myself to help me track how I spend my spare time and in particular how I’m meeting my reading goals.

The secondary goals were to improve my skills in frontend development, and to gain some experience with running a webservice in a less comprehensively managed environment (linode as opposed to heroku / gcp+kubernetes which I’ve worked with in the past).

In those terms, I consider the project successful:

  • I got to know the htmx library, and I believe I’m now capable of discussing its pros and cons compared to other frontend technologies;
  • I’ve learned a lot of things about building a backend service in clojure, with reitit to construct the app, HugSQL and later HoneySQL to manage postgres queries, malli for data validation, and hiccup for html templating;
  • I’ve learned how to set up nginx (with Let’s Encrypt certificates) and postgresql on an Ubuntu box running on Linode, and how to set up the DNS records to expose it to the internet;
  • I’ve gained insights into how to manage my workload on a solo project spanning multiple months without burning out on it;

And most important of all, I’ve built a webapp which is useful to me daily and this makes me very happy.

Commentary on acceptance criteria

These are the acceptance criteria as I stated them in the original project plan:

The user is able to access the application through web browser on their computer and on their phone.

With the app deployed on linode and linked to my personal web domain, I can get to it from any device with internet access.

The big useful trick related to this is this one magical html directive which makes the app usable on a mobile screen:

<meta name="viewport" content="width=device-width, initial-scale=1">

The user can only access their own data.

Success by default - only one user has access to the app.
Throughout the project’s lifetime, I’ve been toying with the idea of turning it into a publicly available service, where anyone can set up an account and track whatever activities they wish. I never got around to implementing it though.
The primary goal was always to build a tool for myself and my own needs.
I still might add support for additional users later, but probably only as an exercise / proof-of-concept. I’m not really in love with the idea of maintaining features for other people and worrying about breaking things for them.

The user is able to clock-into an activity and clock-out-of an activity, the service tracks the time spent in between.

The clock-in button triggers a http request, which begins a new clocked interval on the server, the clock-out button appears on currently clocked-in activities and when clicked, it sends out another http request which ends the interval.
The frontend can be kept pretty simple because all of the application state is kept serverside.

The user is able to retroactively adjust the bounds of a clocked interval or to delete it completely.

There is an interval editing form, including input validation.
The Delete button triggers a popup requesting confirmation, so that it’s difficult to destroy records inadvertently.

The user is able to track multiple different activities independently.

The activities/presets are individually addressable in the API, so the user can treat them as independent entities.

The user is able to define time-spent based goals for each activity.

Originally, I planned to have a goal editor directly in the application itself, but it was always low on the list of priorities because goals change the least often out of all of the entities.

The goals are defined as database records, so when I want to edit them, I can do it without having to release a new version of the backend.

For now, whenever I’m defining a new goal or editing an old one, I do it directly in the database.

The user is able to review their time spent and how it compares to their goals in a visual interface.

I built a simple table to track the progress on each goal, and a timeline chart for aggregating and exploring stored interval data.


The user is able to retroactively adjust the metadata (such as activity details) associated with a clocked interval.

Each interval is linked to an activity preset - this is the entity with the clock-in/clock-out button.
Each preset may have multiple tags which can be edited, and these tags may be shared by multiple presets.
The tags can be used to compute different aggregations of the intervals.
Any interval can also be easily reassigned to another preset.

Libraries/tools used


htmx

htmx is a clientside library which extends common html elements with additional attributes to open up new possibilities of client-server interaction without leaving behind the hypermedia-centric approach.

When communicating with the server, the client receives html fragments as response, which are then swapped into the html document without doing a full reload.
This makes for a better user experience than redirecting to a different page all the time, and for a substantially simpler UI definition (as opposed to managing clientside state with a js framework).

It can’t do everything, but for most usecases in my project it hits the right balance of power/complexity.

A few highlights:

  • The hx-boost attribute converts all <a href=...> elements targeting the origin domain into ajax requests whose response body is swapped into the current page, instead of doing a full reload.
    This eliminates the obnoxious short flicker that so often occurs on pure-html sites.

  • The active search pattern - with just a couple of htmx directives, the client is able to dynamically search and load data from the server.
    From the user’s POV, all of the state is represented in the DOM itself, and not in an opaque js application.
    I use this pattern in a component which loads book data from OpenLibrary; see Devember 2021 - Punch Clock Web App - #17 by msladecek

I also want to mention the progress on the multiple-swap feature request which I wrote about a while ago.
I ended up contributing a PR to the htmx project and it has been merged into the dev branch so it may become part of the next release of htmx.


clojure

I like clojure, I think it’s really great.
In this project, the serverside logic isn’t very complex, so any other general purpose language with a webserver framework would probably do just as well.

One thing that stands out though is the hiccup library for html templating.
hiccup lets us build html fragments from clojure data structures, so there is no need for an additional templating language with its own set of expressions (like selmer, mustache.js, jinja2, or django templates).
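A minimal illustration of the data-as-markup style (using hiccup 2.x):

```clojure
(require '[hiccup2.core :as h])

;; Plain clojure data in, html out; seqs (e.g. from `for`) are flattened.
(str (h/html [:ul (for [book ["Dune" "Hyperion"]]
                    [:li book])]))
;; => "<ul><li>Dune</li><li>Hyperion</li></ul>"
```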


reitit

At its core, reitit is a general purpose routing library.
It can parse path and query parameters out of URIs and find the right handler for them according to a specification, and it can also do “reverse routing”, ie. construct a URI for a given handler and parameter set.
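A standalone sketch of the reverse routing flow (the route name and parameters are illustrative):

```clojure
(require '[reitit.core :as r])

(def router
  (r/router ["/api/intervals/{interval-id}" {:name :api-interval}]))

;; Look the route up by name, fill in the path params,
;; then render it to a path with optional query params.
(-> (r/match-by-name router :api-interval {:interval-id 7})
    (r/match->path {:preset_id 3}))
;; => "/api/intervals/7?preset_id=3"
```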

The library ships with many utilities for use in the context of a ring-based web service.
The route specification can be augmented with chains of middleware that apply only on certain paths, and with parameter validation/coercion specs.

Being able to define separate middleware chains is very useful, and I’ve been missing a feature like this in other web frameworks in the past.


malli

malli is a library for data validation.
The schema syntax resembles hiccup.
I use it to validate and coerce http request parameters.

I think it’s pretty neat, but my use case is fairly basic, so I didn’t really have an opportunity to dive deeper into some of its more interesting features.

At the same time, I found that some features that I would like were not available in malli.
For example, I have an endpoint which accepts a form with several well known fields, and then several dynamic fields (label and type are known, but tag-$A, tag-$B are dynamically generated).
As far as I can tell, there is no way to easily express this in malli, so I am considering switching to validation based on json schema, which supports the additionalProperties field; but if I do switch, I would probably have to write an adapter for reitit to make the coercion work.


vega-lite
I’ve used vega-lite in the past, but in this project I’ve discovered some new useful features it has.
I use it to generate the timeline visualization chart: Devember 2021 - Punch Clock Web App - #19 by msladecek.

With vega-lite, you define your visualization as a static json document - you declare where to get the data from, what to draw on the x axis, the y axis, what attribute should decide the color etc…

In this project, I learned about the interactive parametrization features of vega-lite.
It is possible to define a parameter binding pointing to an element outside of the visualization itself.
The library then installs its event handlers on those elements and adjusts the visualization when the parameter values change.
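A minimal sketch of such a spec (the data URL, field names, and parameter are invented): a range slider is bound to a `minDuration` parameter which filters the chart. Setting `"element"` inside the `bind` object would render the control into an existing DOM node outside the chart itself.

```json
{
  "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
  "data": {"url": "data/activities.json"},
  "params": [
    {
      "name": "minDuration",
      "value": 0,
      "bind": {"input": "range", "min": 0, "max": 120, "name": "Min minutes: "}
    }
  ],
  "transform": [{"filter": "datum.minutes >= minDuration"}],
  "mark": "bar",
  "encoding": {
    "x": {"field": "date", "type": "temporal"},
    "y": {"field": "minutes", "type": "quantitative"},
    "color": {"field": "activity", "type": "nominal"}
  }
}
```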

It’s a bit like having a small reactive framework to control the interactivity.
The only downside is that not all fields of the specification can be parameterized this way, so in one case (the time unit selector), I resorted to triggering a reload on the entire chart: Devember 2021 - Punch Clock Web App - #19 by msladecek.


linode
I have nothing but positive things to say about linode.
At this point I’ve only played with the DNS configuration tool, and with some basic stuff related to setting up a small linode instance.

They have pretty good docs, and I’ve learned a lot when setting up my services.
Particularly interesting to me were the things related to nginx setup and TLS certificates with Let’s Encrypt.

General thoughts on the project

This thread

From the start I’ve been posting weekly updates in this thread.
I tried to pick out some interesting problems I’ve faced and things I’ve learned.
I think the commitment to composing a progress update post (and therefore having to have made some progress) was key to staying on track for 2+ months.

I would wholeheartedly recommend running a similar periodic progress log when working on a solo project like this, even if you decide to keep it private.
I don’t really know how many people have been following the thread, and I expect not many have (in great detail), but even if it were just for myself, it would still be very useful.

Looking at a single snapshot of a project, even your own, it’s really easy to take that version for granted and to misunderstand how much effort went into it: how many decisions were made, how many ideas were explored, developed, or scrapped. Having a log of some of it helps form a deeper connection with, and understanding of, it all.

It’s also a good opportunity to practice the communication of technical topics, which is in my opinion an often underestimated, but very important skill.

I also want to thank those who have been reading these posts and engaging with likes and comments; it’s a nice feeling to know that I haven’t been talking to myself this whole time. Thanks in particular to @Mastic_Warrior for the kind and encouraging words.

Current status & future

I’m very happy with how it turned out.
The result is still a bit rough in some places and I have ideas for a couple more features, but I’m going to be spending less time on it than I have in the last few months.

I think I’m really onto something with the time tracking idea. (I’m certain I’m not the only one who had it!)
Even though I’ve still been using it only to track reading time, the goal from the start was to build something more universal, so I’m going to start tracking other activities and hopefully regain even more control over my spare time.

For those interested in something similar, I have some notes which may be useful:

  • Start with small goals that you’re certain you can achieve, stick with them for a while and then reevaluate them.
  • Focus on positive reinforcement only; don’t kick yourself when you fall behind on your goals, and don’t build the concept of “failure” into your system (components which would emphasize the goals you haven’t met).
  • Consider carefully the time frame for evaluating your goals: use an interval long enough to showcase your consistency, but short enough that if you slip up and a goal becomes unachievable within the time frame, it won’t sit there for too long reminding you of it.
  • Build your own stuff. It feels nice. You can shape it into anything you want and you won’t feel trapped by somebody else’s decisions.