Devember 2021 - Punch Clock Web App

Progress 8 Nov to 14 Nov

Finally it was time to start working on the data visualization components.
I have some experience in this area and one of my favorite libraries is vega-lite.
In general, I really like vega-lite’s high-level interactive primitives and sensible defaults for simple visualizations, but I feel I have reached the limits of the library in this project.

I built a barchart to view daily progress on each goal.
This was simple enough, but I got into trouble when I started adding interactive components.

Vega-lite and Vega (its parent project) share a similar interface, where the entire visualization, including its interactive components, is defined in a single json document.
The library allows you to link some of the parameters used by the visualization to already existing html components which implement the EventTarget interface.
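As a rough sketch (the parameter and element names here are mine, and I’m going from memory on the exact bind syntax), linking a parameter to an existing element looks something like this in the spec:

```json
{
  "params": [
    {
      "name": "selected_tag_ids",
      "value": [],
      "bind": {"element": "#tag-selector"}
    }
  ]
}
```

When the #tag-selector element fires input events, vega picks up the new value and re-evaluates whatever expressions reference selected_tag_ids.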

I added a /data/* path to the server which provides data in json format (as opposed to the html fragments provided by other endpoints) to support vega-lite’s conventions.

I was able to build the basic barchart visualization very easily, but I ran into some trouble with my tag filter feature.

In my model, each clocked interval is related to a single preset which may be related to multiple tags.
I wanted an interface where I would select a subset of tags and see a timeline visualization of intervals only related to these tags.
Unfortunately, I ran into trouble when linking input components to vega-lite.
My tag-selector component outputs a list of tag_ids, and I want the visualization component to filter out intervals which don’t match any of the selected tags.
That requires an intersection / overlap operation between two arrays, and unfortunately, vega doesn’t provide one.

I came up with two hacky ways around it.

1. Use regular expressions to mimic an intersection operator

The selector element would return a regex in the form /('tag_id1'|'tag_id2'|'tag_id3'|'tag_id4')/.

I would then convert the interval’s tag ids to a string with a simple “tag_ids.join('|')”, use vega’s regex utility replace to inject a special character, and then search for it:

indexof(replace(datum.tags_str, selected_tag_ids_str, '$'), '$') != -1

This does work, and I kind of love it, but I must also recognize that it’s not a good solution because it relies on three levels of hacks, so I went with the second option.

2. Flatten, filter, groupby

Vega-lite has some pretty powerful data-processing utilities, the ones that are relevant here are flatten, filter and aggregate/groupby.

  • flatten takes a datapoint’s field of type array and replaces the datapoint with a set of datapoints, each carrying one distinct value from the original array (ie. {a: [1, 2, 3], b: "hello"} -> [{a: 1, b: "hello"}, {a: 2, b: "hello"}, {a: 3, b: "hello"}])
  • filter takes a single datapoint and runs a predicate on it; if the predicate returns true, the datapoint stays, otherwise it’s excluded from further processing
  • aggregate/groupby is in some sense the dual operation to flatten; you give it a set of fields to group by, and it applies the aggregate.op function to each group on a field-by-field basis (eg. you can have some fields averaged and others summed)
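Put together, the tag-filtering pipeline could be sketched as a transform array like the following (field names are illustrative, not my actual spec):

```json
{
  "transform": [
    {"flatten": ["tag_ids"]},
    {"filter": "indexof(selected_tag_ids, datum.tag_ids) >= 0"},
    {
      "aggregate": [{"op": "min", "field": "start", "as": "start"}],
      "groupby": ["interval_id"]
    }
  ]
}
```

Each interval is exploded into one datapoint per tag, datapoints whose tag isn’t selected are dropped, and the survivors are grouped back into one datapoint per interval.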

Closing thoughts for this week

In general, I like the idea of having the service provide its own interface through actual html pages, while also supporting a pure-data interface (json/edn/transit) on a subset of paths prefixed by /data/*.
This satisfies both self-containment for MVP and facilities for expansion.

I also like vega’s notion of declarative specification of a data visualization, but in practical scenarios, I think it’s likely that the specification would have to be adjusted to rely on the business-specific context where it’s being interpreted.

I think I also found the limits of my comfort with hyperscript.
The setup where the goal selector influences the tag selector, which in turn influences the timeline visualization, was just one step too janky in my opinion.
I am still impressed with hyperscript and its ability to express each element’s event handling in a near-natural-language syntax, but I do think that react, with a shared atomic state between the three components would have made things quite a bit easier in this case.

I feel a certain sense of dissatisfaction with the amount of presentable artifacts I have this week.
I spent a lot of time debugging and experimenting with hyperscript and vega-lite, most of which turned out to be dead ends.
I also don’t feel like I’ve fully explained all the avenues I’ve explored.
If you have any questions or suggestions about the stuff I tried / didn’t try, I will happily answer.

Also this week I started experimenting with video instead of plain screenshots, please let me know if it improves these weekly progress updates, if I should focus on my writing style, or any other feedback you might have.


Progress 15 Nov to 21 Nov

This week I focused on building a page for editing the clockable presets.
As a part of that I also had to build an interface for editing tags.

I began running into issues with the way I have set up my sql queries and the api.
All of it follows a basically ad-hoc, fewest-moving-parts-possible approach, which was great for setting up the initial prototype quickly, but now, to make any major progress, it feels like I need to bring more structure to it.
I already started moving in that direction by integrating the migratus tool for managing postgres’s schema.

I’ve also learned a lot about htmx, and more vanilla (non-react) frontend development in general, and at the same time I’ve reached the point where the application is usable (although not polished) for most use cases I had in mind when I started.

So now I will focus on some refactoring to make the codebase cleaner and more resilient before I start adding more features.

Notes

Goodreads integration

I have a basic form for adding and editing clockable presets, and something similar for managing tags.
I also have basic integration with goodreads api; the books on my “currently-reading” shelf are shown in my application and they can be converted into clockable presets with a single button (including tag generation for basic book-related attributes such as author, title, series).
To prevent creating duplicate presets, a preset created in this way has a reference attribute containing a link to the external object.
The next time the external object is loaded, it is not possible to create a preset from it because one already exists with the matching reference value.
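The duplicate check itself is simple; here is a standalone sketch (the :reference key and data shapes are illustrative, not my actual schema):

```clojure
;; hypothetical sketch of the duplicate lookup; `presets` and the
;; :reference key are illustrative names, not the actual schema
(defn preset-for-reference
  "Returns the existing preset created from the external object, if any."
  [presets reference]
  (first (filter #(= reference (:reference %)) presets)))

(preset-for-reference
  [{:id 1 :reference "goodreads/123"}
   {:id 2 :reference "goodreads/456"}]
  "goodreads/123")
;; => {:id 1, :reference "goodreads/123"}
```

If this returns a preset, the “create preset” action is disabled for that external object.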

Goodreads no longer issues new api keys, but I was fortunate enough that I still had access to a key that I generated years ago when I was doing some data collection.

I’m quite used to dealing with json-based apis, but goodreads uses xml.
At first I struggled a bit with it, but fortunately clojure has the clojure.data.zip library which makes navigating xml documents less of a chore.

The zipper library is pretty interesting in general, it provides utilities for navigating and altering deeply hierarchical data structures, not just xml documents.
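For a flavor of the zipper interface, here is a minimal sketch using clojure.zip (the core library that clojure.data.zip builds on) on a plain nested vector:

```clojure
(require '[clojure.zip :as z])

;; navigate into a nested vector and edit a value in place
(-> (z/vector-zip [1 [2 3] 4])
    z/down        ; position on 1
    z/right       ; position on [2 3]
    z/down        ; position on 2
    z/right       ; position on 3
    (z/edit inc)  ; 3 -> 4
    z/root)
;; => [1 [2 4] 4]
```

clojure.data.zip layers xml-aware navigation helpers on top of exactly this movement/editing model.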

My service makes the request to the goodreads api when the user clicks the [turn this book into preset] button.
The request takes a few seconds, so rather than delaying the whole page load, I used htmx's hx-trigger: load attribute.

  • The page loads immediately after the user navigates to it, but there is a loading indicator instead of the data
  • htmx triggers the actual request and runs in the background until the response comes back
  • The response is swapped into the page in place of the loader component
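In html terms, the pattern is just this (the /data/goodreads path is my illustration, not the actual endpoint):

```html
<!-- htmx fires the GET on load and swaps the response in place of the loader -->
<div hx-get="/data/goodreads" hx-trigger="load">
  <span class="loader">Loading…</span>
</div>
```

On page load, htmx issues the GET in the background and replaces the div’s content with the response once it arrives.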

I like this a lot.
Even though the user waits the same length of time for the data, it feels much snappier and more responsive.
It is also really easy to set up.

Swapping-in non-2xx responses

By default, when the server responds with something other than 2xx status code, htmx won’t swap the content into the page.
This is sometimes undesirable, for example when a form is submitted but it doesn’t pass server-side validation (and returns 400), the user doesn’t learn of the error.

There is a way to work around it though; htmx has an extension point in the htmx:beforeSwap event where we can hook in and override the event.detail.shouldSwap attribute.
That way, we can continue to have response status codes which match http’s semantics while also being able to deliver their content into the page.
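The hook itself is small; this sketch follows the pattern from htmx’s docs, with 400 as the validation-error status from my example:

```javascript
document.body.addEventListener('htmx:beforeSwap', function (evt) {
  // let 400 responses (failed server-side validation) swap into the page
  if (evt.detail.xhr.status === 400) {
    evt.detail.shouldSwap = true;
    evt.detail.isError = false; // don't log this response as an error
  }
});
```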

There is also an extension being developed which would make this even easier.

Demo


Progress 22 Nov to 28 Nov

No new features this week; I’ve been working on cleaning up the codebase as I mentioned last time.
I’m not done with it yet, and I haven’t fully settled on everything, so I will give more details next week.

Here are some notes about the stuff I’ve experimented with so far:

HugSQL vs HoneySQL

HugSQL is the library I used previously for defining my SQL queries.
You use it by defining the queries in separate .sql files along with some metadata related to the library, and then use them by calling autogenerated clojure functions.

It works pretty well, but I reached the level of complexity where I would like to have some reusable bits between multiple queries.
HugSQL has some facilities for this; it has a snippet system and a “macro”-like system where clojure code in special comment blocks is used to generate parts of the query.
Both of these approaches seem rather clumsy to me. They play against the core benefit of the HugSQL library, which is having the sql code nearly completely decoupled from the rest of the application so one can use sql-specific tooling to work with it, while also not leaning fully into clojure’s data manipulation strengths.

So I switched to HoneySQL.
In HoneySQL you define your queries directly in clojure code, in terms of native clojure data structures.
For example:

{:select [:a :b :c]
 :from   [:foo]
 :where  [:= :f.a "baz"]}

Structurally, it’s pretty similar to SQL itself, but it takes some time to get used to the different ways of expressing it.
The big advantage is that it allows for limitless reuse of query components, because they’re just regular pieces of clojure data, and clojure is pretty great at composing this kind of thing.
HoneySQL also has lots of helper utilities which make it even easier.
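For example, reusable fragments are just maps and functions over maps; this is a sketch with illustrative table and column names, not my actual queries:

```clojure
;; a sketch of composing query fragments as plain data
;; (HoneySQL-style maps; names are illustrative)
(def base-intervals
  {:select [:i.id :i.active_during]
   :from   [[:clocked_interval :i]]})

(defn for-preset
  "Narrow any interval query down to a single preset."
  [query preset-id]
  (assoc query :where [:= :i.preset_id preset-id]))

(for-preset base-intervals 10)
;; => {:select [:i.id :i.active_during]
;;     :from   [[:clocked_interval :i]]
;;     :where  [:= :i.preset_id 10]}
```

A fragment like for-preset can be layered onto any base query, which is exactly the kind of reuse HugSQL’s snippet system made awkward.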

Reitit

reitit is the library at the core of my web application.
It seems to me that the reitit project is trying to fulfill two roles:

  1. As a general purpose “router” library, which parses urls and their params in whatever context you need
  2. As a microframework for ring web applications

On its own, the router is pretty great and it’s easy enough to work with, but the great people at metosin built a whole ecosystem of tools and libraries for building web applications.

This is all awesome, but I feel like their docs suffer because of it.
They are clearly written by somebody who understands very well how to use each component and how to glue them all together, but the docs sometimes omit that glue, along with the reasoning for why things were designed this way.

In my experience this leads to a lot of trial and error when trying to put together an application whose structure is different from the ones shown in the project’s examples.

To be fair though, many of the tools are aimed at ease of debugging, and they’re well worth the effort needed to get them set up.

Reverse routing in hiccup templates with reitit and clojure metadata

My components first assemble a representation of the html in the hiccup format, which is then converted to html and served by the app.

Often, one component refers to another resource’s url, and I previously handled that by literally assembling the url in the component itself.
For example:

(defn interval-detail
  ([interval presets] (interval-detail interval presets {}))
  ([{:clocked_interval/keys [id active_during] current-preset-id :preset/id}
  ...
       [:div.control
        [:button.button.is-danger
         {:hx-delete (format "/api/intervals/%d" id)
          :hx-confirm "Delete interval?"}
         "Delete"]]]])))

Here, the path to delete the interval resource is fully formed within the component, so in essence, the component has to be aware of the route scheme, and must be able to assemble the urls correctly.
In this case it is quite simple, but it can get complicated and clumsy very quickly when more complex paths and query parameters become involved.

To simplify things a bit I am now making use of reitit's reverse routing feature.
In the routing table, the url is identified with a keyword, for example: ["/api/intervals/{interval_id}" {:name :api-interval}]
And then the component only has to know the keyword, and the router can be used to construct the path:

        [:button.button.is-danger
         {:hx-delete ^:route [:api-interval {:interval-id id}] 
          :hx-confirm "Delete interval?"}
         "Delete"]

I use clojure’s metadata to annotate the route’s keyword and the route’s params here to indicate it is not part of the regular hiccup structure and should be reverse routed.

The substitution takes place in this utility function:

;; assumes [clojure.walk :refer [prewalk]] and [reitit.core :as r] in the ns requires
(defn substitute-routes [router form]
  (letfn [(swap-obj [obj]
            (if (:route (meta obj))
              (let [[route-name path-params query-params] obj]
                (-> (r/match-by-name router route-name path-params)
                    (r/match->path query-params)))
              obj))]
    (prewalk swap-obj form)))

prewalk is a built-in clojure utility which traverses the entire hiccup tree and substitutes each object using the supplied function; here I check whether the object has the {:route true} metadata and interpret it if so, otherwise I leave it unchanged.
reitit's match->path constructs the target url including query parameters.
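Here is a standalone toy version of the same idea, with a plain map standing in for the reitit router, just to show the metadata mechanics:

```clojure
(require '[clojure.walk :refer [prewalk]])

;; a plain map standing in for the router (illustrative, not reitit)
(def routes {:api-interval "/api/intervals/%d"})

(defn substitute
  "Walk a hiccup form and replace ^:route-annotated vectors with urls."
  [form]
  (prewalk (fn [obj]
             (if (:route (meta obj))
               (let [[route-name {:keys [interval-id]}] obj]
                 (format (routes route-name) interval-id))
               obj))
           form))

(substitute [:button {:hx-delete ^:route [:api-interval {:interval-id 7}]}])
;; => [:button {:hx-delete "/api/intervals/7"}]
```

The real version differs only in delegating the url construction to reitit’s match-by-name / match->path instead of format.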

Next week, I will continue the big revision of the codebase.


Progress 29 Nov to 5 Dec

I nearly completed the refactoring this week, but work has been pretty hectic and I felt I needed some time away from the computer so I still have a little bit to go.

Next week, I’ll post some more thoughts about the new shape of the application and my plans with it going forward.


You are seriously putting in some good work. This will be good for a resume/portfolio.


Progress Dec 6 to Dec 12

The grand refactoring is basically done; the only thing left is to clean up a few things and check that everything is working as it should.

I originally thought it would take less than a week, and looking back at it now, one week was a reasonable estimate for the amount of work, but to avoid burning out, I think it’s important not to force oneself to spend every minute of free time on the same side project or activity.
Also, I can’t help but dedicate some time to advent of code every day.
I’ve been enjoying it so much, I’ve gone back to take a look at puzzles from previous years.

I had to cancel some plans because of covid restrictions, so I have some free time coming up and I have some more ideas I’d like to experiment with in this project.
I probably won’t do all of them, this is just a bit of initial brainstorming:

  • Use htmx's history api integration in place of full page reloads - should make the app a bit more pleasant to use
  • Customize the bulma css distribution - I’m currently using the default colorscheme and I have to say I’m not a huge fan of it. I may try to play around with it a bit.
  • Drop the dependency on my goodreads api key, and use some open provider of library data; perhaps https://openlibrary.org, or https://bookwyrm.social. I still have to do more research on that. Perhaps pair it up with an active search feature backed by htmx.
  • Open the app to other users - I would have to build a user management system, make sure each user’s data is naturally separated, keep track of login information etc… This would be a pretty major extension of the project and one I didn’t originally plan on, but it might be a fun challenge.
    I would probably use some external SSO provider instead of managing all of the user identification stuff myself.
  • Add some more “business” logic, for example make sure only one activity can be clocked in at a time, or that no preset can be edited while it’s being clocked etc… More thinking required on this either way.
  • More data visualization options - control the aggregation interval, more charts (perhaps github style calendar heatmap)

Some technical notes from last week:

malli for data validation and coercion

In the previous version of my backend application, I used built-in ring middleware to obtain the parameters sent with each HTTP request, and I had to manually parse the data and make sure all required fields were supplied.
This also meant I had to have custom logic to check everything and to throw an appropriate error when something wasn’t right.
Building a comprehensive data validation layer this way would be way too much work, so I didn’t really bother with it outside of the most obvious cases.

Now, I am using the malli library to do the data validation and coercion.
It integrates nicely with the reitit library (it’s built by the same company).

It serves a similar purpose to something like an openapi spec - it is a way to declare which data is accepted by each endpoint.
Unlike openapi, the malli definitions aren’t published by the service itself by default, but the reitit-malli library has some utilities to do so.

The schema of each endpoint’s parameters is declared in its definitions in the router, for example:

       ["/presets/{preset-id}" {:middleware [preload-preset-with-tags]
                                :parameters {:path [:map [:preset-id [:and [:> 0] :int]]]}}
        ["/card" :preset-card]
        ["/card/actions" :preset-card-actions]
        ["/form" {:name :preset-form
                  :middleware [parse-form-tag-ids]
                  :parameters {:form [:map
                                      [:label :string]
                                      [:type :string]]}}]]

(each endpoint under /presets/{preset-id} takes a path parameter preset-id which is a positive integer, and the /presets/{preset-id}/form endpoint takes form arguments label and type of type string)

Not only is this really great for validating the incoming data, but it also parses it to the specified types.
For example, when called as /presets/10, [:preset-id :int] would parse the value as {:preset-id 10}, whereas [:preset-id :string] would parse it as {:preset-id "10"}.

This is an amazing convenience, but it isn’t without its limitations.
For example in this particular endpoint, the form also contains a list of tags associated with each preset.

The list of tags is created from a list of checkboxes in the page itself.
The <input type="checkbox"> element forces a certain representation of the checkbox’s state: when it’s active, the form will contain the field $name: "on" ($name is a variable chosen by the developer) and when it’s inactive it will not supply anything into the form.

So the form body may look like this:

type: "some preset type"
label: "a very cool preset"
tag-10: "on"
tag-13: "on"

As far as I know there is no way to force the html components to aggregate the list of tags into a single field, and there is no way to declare a “wildcard” field in the validation map in malli, so I had to take care of this using a small custom middleware:

(defn parse-form-tag-ids [handler]
  (fn [req]
    (let [params (:form-params req)
          tag-ids (->> (for [param (keys params)
                             :let [[_ tag-id-str] (re-find #"^tag-(\d+)$" param)]
                             :when tag-id-str]
                         (Integer/parseInt tag-id-str))
                       (into #{}))]
      (-> req
          (assoc-in [:parameters :form :tag-ids] tag-ids)
          handler))))

It pulls the raw data from the :form-params field generated by ring, parses everything that looks like a tag- field, and adds the result to the [:parameters :form] map, which is the same place used by malli.
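The extraction itself, shown standalone (same regex as the middleware above):

```clojure
;; pull tag ids out of checkbox-style form params like "tag-10" -> 10
(defn form->tag-ids [params]
  (->> (keys params)
       (keep #(second (re-find #"^tag-(\d+)$" %)))
       (map #(Integer/parseInt %))
       (into #{})))

(form->tag-ids {"label"  "a very cool preset"
                "tag-10" "on"
                "tag-13" "on"})
;; => #{10 13}
```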

Although this is a slight wart, it must be noted that the malli devs are aware of this use case and have been discussing ways to address it in a future version of the library.


Progress Dec 13 to Dec 19

Nothing shiny to show off this week unfortunately.
I imported all of the data from my previous tool into the database I set up on linode for this project.
Unfortunately, the way I implemented the tag filtering in the vega-lite chart is not very performant, and it completely tanks with 400+ clocked intervals on 50+ presets; the full year of data is also not very legible.

I thought I would get it finished by the end of the week, but I’ve been stuck in a loop where I keep coming up with different ways to rebuild it.

Wow, where were you hosting it before then? I am doing the CPU share 2 core, 4GiB RAM, 80GiB HDD plan on Linode. That performs better than my 6300-FX PC at home.

The performance trouble was in the client-side javascript code under the timeline visualization chart.

All of the data is loaded into the client, but the chart has filtering criteria defined in it.
The expression language doesn’t have anything great for checking collection overlap (collection of tags on an interval record and collection of selected tags for filtering), so I hacked it together with some array operations.

As a result it was doing (number of defined tags) * (number of intervals) * (mean number of tags on an interval) comparisons on every redraw event.

For small amount of test data it was fine, but it exploded quickly when I added more.


Progress Dec 20 to Dec 26

Once again I found less time for this project this week than I had hoped (Advent of code days 23 and 24 were quite a challenge).

I implemented two new features.

Timeline range controls

Added two date selectors to the overview page.
Their value is read into the chart’s code through vega-lite’s parameters interface which I’ve written about before.

They’re just plain basic native date inputs so they work well in both browser and on mobile.

The chart handles them by subscribing to their input change events, so the interaction is quick and snappy.

Clockable preset generation from OpenLibrary

My implementation follows pretty closely htmx’s Active search example.

It works like this:

  • the htmx code listens for events on a text input box
  • after the user is done typing for 0.5 seconds, it dispatches a request to the server and swaps the response into a specified dom element
  • while the request is in flight, it automatically displays a “Loading …” indicator in another specified dom element
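Wired up, the htmx side of active search amounts to a few attributes (the path and element ids here are illustrative):

```html
<input type="search" name="q"
       hx-get="/data/openlibrary-search"
       hx-trigger="keyup changed delay:500ms"
       hx-target="#search-results"
       hx-indicator="#search-loading">
<span id="search-loading" class="htmx-indicator">Loading …</span>
<div id="search-results"></div>
```

The delay:500ms modifier debounces typing, and hx-indicator handles the “Loading …” element automatically while the request is in flight.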

I route the request to my own server instead of directly to https://openlibrary.org/ because I need to convert the json response to html fragments representing my components.

There is another way to do this with the client-side-templates extension.

With this extension properly configured, htmx pulls the response data through a template document before swapping it into the dom.

I may revisit this later, but for now I prefer to leave all of the templating logic unified on the server.

Demo

In the demo you can see how I create a new preset after searching OpenLibrary for the book title, then I reassign several existing intervals to the new preset, and finally I show that the timeline chart uses different colors for intervals based on their associated presets.

You can also see a very annoying flicker in the interval editing view when I select another interval.
Currently I treat the change of selected record as a page change, so the browser reloads everything, even though the interface is designed to be very similar in both pages.
It’s really unpleasant so I’m going to take a look at that next, I’m going to use htmx to swap out the relevant elements without performing a full page reload.


So this turned out to be a pretty simple fix.

Here’s the diff and explanation:

-       {:href ^:route [:page-interval-detail
-                       {:interval-id interval-id}
-                       (when selected-preset-id
-                         {:preset_id selected-preset-id})]}
+       {:hx-get ^:route [:page-interval-detail
+                         {:interval-id interval-id}
+                         (when selected-preset-id
+                           {:preset_id selected-preset-id})]
+        :hx-push-url "true"
+        :hx-select "#page-content"
+        :hx-target "#page-content"
+        :hx-swap "outerHTML"}
  • (the response from ^:route [:page-interval-detail ...] is still the same as before)
  • I use hx-get instead of href, so the request is issued via xhr and handled by htmx instead of the browser navigating directly
  • hx-select chooses the part of the response which will be swapped into the current page
  • hx-target chooses the part of the current page into which new content will be swapped
  • hx-swap "outerHTML" means the entire target element will be replaced (as opposed to “innerHTML” which would put the content inside of the target element instead of replacing it)
  • hx-push-url "true" places the url of the request into the history stack, so the native “Go back”/“Go forward” browser functionality works as expected

Basically, instead of throwing away the current page and redirecting to the new one using href, I only swap the #page-content element from the new page into the current one, so there is no full page reload, and I push the new url into history so that the browser can navigate back and forward just as if it were a regular page redirect.

I like this a lot.
This was a really easy change, it required basically no changes to the page’s structure at all, and I got a much better user experience out of it.

The browser history api is notoriously clumsy, so the fact that I can make it do exactly what I want with a single htmx directive feels like magic to me.

I feel these features of htmx, along with the active search feature I showcased this week, really demonstrate how useful htmx can be.

In the recording, you will notice there is still some delay between the click and the content change; that’s because the request still has to go to the server and back, but the really annoying page reload flash is completely gone and page history works just the same as before.


Progress Dec 27 to Jan 2

Technically, this was the last week of Devember, but I’m going to continue adding features and fixing bugs.

I will probably continue to post some development log updates in this thread, but perhaps not every week.

Having the habit of developing, composing my notes, and documenting my progress has been very useful for keeping the project on track.

From my day job, I’m used to two-week sprints, but for a personal project such as this, a single-week cycle was ideal.

I’ll prepare a detailed demo and an overview of all the work done and post it in next week’s update.

hx-boost

Since the last post, I’ve read up more about htmx’s treatment of history api and ajax-driven navigation.
Turns out there is an hx-boost attribute which automatically converts href redirects to ajax-driven swaps.
Basically it’s the same change I did manually, but it can be applied automatically across the whole website.
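The whole change is one attribute on a shared parent element:

```html
<body hx-boost="true">
  <!-- every descendant <a href> and <form> now navigates via ajax swaps -->
  <a href="/presets">Presets</a>
</body>
```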

The user experience is much better, the whole thing feels much smoother as there are no full page reloads anymore.

Different colorscheme for development deployment

While working on a UI bug, I got into a situation where the app seemingly wasn’t reacting to any of the changes I was making.
I restarted my dev server, tried deleting the component entirely, etc., but it just kept behaving the same way as before.
Then I realized I was looking at the “production” version running on linode, not the one running on my local computer where the dev codebase is.

This was frustrating and a bit embarrassing, even more so because it wasn’t the first time it happened.

So I followed bulma’s guide on customizing the colors and set it up so that the development version has a red nav bar and the prod version a blue one, so I won’t mistake them again.

Timeline overview chart - aggregation window selector

I made substantial changes in the timeline chart, the most important one being the inclusion of an “aggregation window” parameter.

This is a major convenience feature for viewing a large section of the timeline with many clocked intervals.

When the aggregation window is switched, the goal value is recalculated as well.

Unfortunately, I wasn’t able to implement it entirely using vega-lite’s parameters specification.
This is because I use the timeUnit encoding parameter to control the aggregation on the x-axis.
This parameter cannot be parameterized because it controls the compiled vega spec, not a binding to a vega signal.

As a workaround, I manually transform the value in the specification document before it’s passed to the component generating function, and I also added an event handler to the aggregation window selector, which reloads the component every time the value changes.

It’s not great, but definitely good enough.
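The manual transform is essentially an assoc-in on the spec map before it reaches the chart component; a sketch with illustrative key paths and function names:

```clojure
;; hypothetical sketch: patch the timeUnit in the vega-lite spec map
;; before it is rendered (key paths are illustrative, not my actual spec)
(defn with-aggregation-window [spec window]
  (assoc-in spec [:encoding :x :timeUnit] window))

(with-aggregation-window
  {:mark "bar" :encoding {:x {:field "start" :type "temporal"}}}
  "yearweek")
;; => {:mark "bar",
;;     :encoding {:x {:field "start", :type "temporal", :timeUnit "yearweek"}}}
```

Since the patched key lives in the compiled spec rather than in a signal, the component has to be reloaded whenever the selector changes, which is what the extra event handler does.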

Timeline overview chart - color by tag category and highlight by tag value

In the previous version, the interval records in the chart were colored based on the name of the preset which recorded them.
I wanted to also add means of coloring by arbitrary tags - such as name of book author, name of the book series etc…

So I parameterized the coloration field and added a selector based on tag category for it.

It took some wrangling of the vega-lite specification, but it works so it’s all good.

Unfortunately, the chart can be quite busy when there are lots of different values for the selected category, so I added handlers for vega-lite’s built-in interactivity features, so I can highlight the intervals of the particular tag value.


Jan 4th 2022 - Retrospective

In short: It’s been fun and I learned a lot.

The goal of this project was to build a mobile-friendly web app for myself to help me track how I spend my spare time and in particular how I’m meeting my reading goals.

The secondary goals were to improve my skills in frontend development, and to gain some experience with running a webservice in a less comprehensively managed environment (linode as opposed to heroku / gcp+kubernetes which I’ve worked with in the past).

In those terms, I consider the project successful:

  • I got to know the htmx library, and I believe I’m now capable of discussing its pros and cons compared to other frontend technologies;
  • I’ve learned a lot of things about building a backend service in clojure with reitit to construct the app, hugSQL or HoneySQL to manage postgres queries, malli for data validation and hiccup for html templating;
  • I’ve learned how to set up nginx (with Let’s Encrypt certificates) and postgresql on an Ubuntu box running on Linode, and how to set up the DNS records to expose it to the internet;
  • I’ve gained insights into how to manage my workload on a solo project spanning multiple months without burning out on it;

And most important of all, I’ve built a webapp which is useful to me daily and this makes me very happy.

Commentary on acceptance criteria

This is the acceptance criteria as I’ve stated them in the original project plan:

The user is able to access the application through web browser on their computer and on their phone.

With the app deployed on linode and linked to my personal web domain, I can get to it from any device with internet access.


The big useful trick related to this is one magical html directive which makes the app usable on a mobile screen:

<meta name="viewport" content="width=device-width, initial-scale=1">

The user can only access their own data.

Success by default - only one user has access to the app.
Throughout the project’s lifetime, I’ve been toying with the idea of turning it into a publicly available service, where anyone can set up an account and track whatever activities they wish. I never got around to implementing it though.
The primary goal was always to build a tool for myself and my own needs.
I still might add support for additional users later, but probably only as an exercise / proof-of-concept. I’m not really loving the idea of maintaining features for other people and worrying about breaking things for them.

The user is able to clock-into an activity and clock-out-of an activity, the service tracks the time spent in between.

The clock-in button triggers an http request which begins a new clocked interval on the server; the clock-out button appears on currently clocked-in activities and, when clicked, sends another http request which ends the interval.
The frontend can be kept pretty simple because all of the application state is kept serverside.
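As a rough sketch of what this looks like with htmx (the endpoint path, element id, and preset number here are illustrative, not the app’s actual ones):

```html
<!-- Hypothetical clock-in button: the POST begins a new interval on the
     server, and the html fragment in the response (now containing a
     clock-out button) is swapped in place of this element. -->
<div id="preset-7">
  <button hx-post="/preset/7/clock-in"
          hx-target="#preset-7"
          hx-swap="outerHTML">
    Clock in
  </button>
</div>
```

Because the server renders the next state of the element, no clientside bookkeeping is needed at all.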

The user is able to retroactively adjust the bounds of a clocked interval or to delete it completely.

There is an interval editing form, including input validation.
The Delete button triggers a popup requesting confirmation, so that it’s difficult to destroy records inadvertently.

The user is able to track multiple different activities independently.

The activities/presets are individually addressable in the API, so the user can treat them as independent entities.

The user is able to define time-spent based goals for each activity.

Originally, I planned to have a goal editor directly in the application itself, but it was always low on the list of priorities because goals change the least often out of all of the entities.

The goals are defined as database records, so when I want to edit them, I can do it without having to release a new version of the backend.

For now, whenever I’m defining a new goal or editing an old one, I do it directly in the database.
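For illustration, editing a goal directly in the database might look something like this (the table and column names are hypothetical; the real schema may differ):

```sql
-- Hypothetical schema: define a new daily reading goal for a preset.
INSERT INTO goal (preset_id, target_minutes, period)
VALUES (1, 30, 'daily');

-- Adjust an existing goal without redeploying the backend.
UPDATE goal
SET target_minutes = 45
WHERE id = 1;
```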

The user is able to review their time spent and how it compares to their goals in a visual interface.

I built a simple table to track the progress on each goal, and a timeline chart for aggregating and exploring stored interval data.

[image: goal progress table and timeline chart]

The user is able to retroactively adjust the metadata (such as activity details) associated with a clocked interval.

Each interval is linked to an activity preset - this is the entity with the clock-in/clock-out button.
Each preset may have multiple tags which can be edited, and these tags may be shared by multiple presets.
The tags can be used to compute different aggregations of the intervals.
Any interval can also easily be reassigned to another preset.

Libraries/tools used

htmx

htmx is a clientside library which extends common html elements with additional attributes, opening up new possibilities for client-server interaction without leaving behind the hypermedia-centric approach.

When communicating with the server, the client receives html fragments as responses, which are then swapped into the html document without doing a full reload.
This makes for a better user experience than redirecting to different pages all the time, and for a substantially simpler UI definition (as opposed to managing clientside state with a js framework).

It can’t do everything, but for most use cases in my project it hits the right balance of power and complexity.

A few highlights:

  • The hx-boost attribute converts all <a href=...> elements targeting the origin domain into ajax requests whose response body is swapped into the current page, instead of doing a full reload.
    This eliminates the obnoxious short flicker that so often occurs on many pure-html sites.

  • The active search pattern - with just a couple of htmx directives, the client is able to dynamically search and load data from the server.
    From the user’s POV, all of the state is represented in the DOM itself, and not in an opaque js application.
    I use this pattern in a component which loads book data from openlibrary.org: Devember 2021 - Punch Clock Web App - #17 by msladecek
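The active search pattern boils down to a few attributes on the input element. A rough sketch (the endpoint and ids are made up for illustration):

```html
<!-- Illustrative active search: as the user types, htmx waits for a
     500ms pause, GETs /search?q=..., and swaps the returned html
     fragment into the results container. -->
<input type="text" name="q"
       hx-get="/search"
       hx-trigger="keyup changed delay:500ms"
       hx-target="#search-results">
<div id="search-results">
  <!-- server-rendered results land here -->
</div>
```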

I also want to mention the progress on the multiple-swap feature request which I wrote about a while ago.
I ended up contributing a PR to the htmx project and it has been merged into the dev branch so it may become part of the next release of htmx.

clojure

I like clojure, I think it’s really great.
In this project, the serverside logic isn’t very complex, so any other general purpose language with a web server framework would probably do just as well.

One thing that stands out though is the hiccup library for html templating.
hiccup lets us build html fragments from clojure data structures, so there is no need for an additional templating language with its own set of expressions (like selmer, mustache.js, jinja2, or django templates).
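For a flavour of what that looks like, here is a minimal sketch using hiccup 1.x’s hiccup.core/html (the fragment itself is made up):

```clojure
(require '[hiccup.core :refer [html]])

;; An html fragment is just a clojure vector: [tag attribute-map & children].
(html [:ul {:class "goals"}
       [:li "Reading: 30 min/day"]])
;; => "<ul class=\"goals\"><li>Reading: 30 min/day</li></ul>"
```

Because the fragments are plain data, they can be built and transformed with ordinary clojure functions before rendering.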

reitit

At its core, reitit is a general purpose routing library.
It can parse path and query parameters out of URIs and find the right handler for them according to a specification, and it can also do “reverse routing”, i.e. construct a URI for a given handler and parameter set.

The library ships with many utilities for use in the context of a ring-based web service.
The route specification can be augmented with special middleware chains, applied only on certain paths, and with parameter validation/coercion specs.

Being able to define separate middleware chains is very useful, and I’ve been missing a feature like this in other web frameworks in the past.
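A minimal sketch of both routing directions (the route and parameter names are illustrative, not the app’s actual ones):

```clojure
(require '[reitit.core :as r])

;; A router built from a route specification.
(def router
  (r/router ["/intervals/:id" ::interval]))

;; Forward routing: URI -> matched route with parsed path parameters.
(:path-params (r/match-by-path router "/intervals/42"))
;; => {:id "42"}

;; Reverse routing: route name + parameters -> URI.
(:path (r/match-by-name router ::interval {:id 42}))
;; => "/intervals/42"
```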

malli

malli is a library for data validation.
The schema syntax resembles hiccup.
I use it to validate and coerce http request parameters.
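A minimal sketch of that usage (the schema here is made up for illustration):

```clojure
(require '[malli.core :as m]
         '[malli.transform :as mt])

;; Schemas are plain data, in a hiccup-like vector syntax.
(def IntervalParams
  [:map
   [:preset-id int?]
   [:note {:optional true} string?]])

;; Validation.
(m/validate IntervalParams {:preset-id 3})
;; => true

;; Coercion of string parameters, e.g. from a query string.
(m/decode IntervalParams {:preset-id "3"} mt/string-transformer)
;; => {:preset-id 3}
```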

I think it’s pretty neat, but my use case is fairly basic, so I didn’t really have an opportunity to dive deeper into some of its more interesting features.

At the same time, I found that some features that I would like were not available in malli.
For example, I have an endpoint which accepts a form with several well known fields, and then several dynamic fields (label and type are known, but tag-$A, tag-$B are dynamically generated).
As far as I can tell, there is no easy way to express this in malli, so I am considering switching to validation based on json schema, which supports the additionalProperties field. If I do switch, I would probably have to write an adapter for reitit to make the coercion work.

vega-lite

I’ve used vega-lite in the past, but in this project I’ve discovered some new useful features it has.
I use it to generate the timeline visualization chart: Devember 2021 - Punch Clock Web App - #19 by msladecek.

With vega-lite, you define your visualization as a static json document - you declare where to get the data from, what to draw on the x axis, the y axis, what attribute should decide the color etc…

In this project, I learned about the interactive parametrization features of vega-lite.
It is possible to define a parameter binding pointing to an element outside of the visualization itself.
The library then installs its event handlers on those elements and adjusts the visualization when the parameter values change.
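As a rough illustration of a parameter bound to an input element (the data endpoint, field names, and element selector are made up; this is a sketch of the mechanism, not the app’s actual spec):

```json
{
  "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
  "data": {"url": "/data/intervals"},
  "params": [
    {
      "name": "minDuration",
      "value": 0,
      "bind": {"input": "range", "min": 0, "max": 120,
               "element": "#duration-slider"}
    }
  ],
  "transform": [{"filter": "datum.duration >= minDuration"}],
  "mark": "bar",
  "encoding": {
    "x": {"field": "date", "type": "temporal"},
    "y": {"field": "duration", "type": "quantitative", "aggregate": "sum"}
  }
}
```

When the bound slider changes, vega-lite re-evaluates the filter transform and redraws the chart without reloading the spec.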

It’s a bit like having a small reactive framework to control the interactivity.
The only downside is that not all fields of the specification can be parameterized this way, so in one case (the time unit selector), I resorted to triggering a reload on the entire chart: Devember 2021 - Punch Clock Web App - #19 by msladecek.

linode

I have nothing but positive things to say about linode.
At this point I’ve only played with the DNS configuration tool, and with some basic stuff related to setting up a small linode instance.

They have pretty good docs, and I’ve learned a lot when setting up my services.
Particularly interesting to me were the things related to nginx setup and TLS certificates with Let’s Encrypt.

General thoughts on the project

This thread

From the start I’ve been posting weekly updates in this thread.
I tried to pick out some interesting problems I’ve faced and things I’ve learned.
I think the commitment to composing a progress update post (and therefore having to have made some progress) was key in staying on track for 2+ months.

I would wholeheartedly recommend running a similar periodic progress log when working on a solo project like this, even if you decide to keep it private.
I don’t really know how many people have been following the thread, and I expect not many have (in great detail), but even if it were just for myself, it would still be very useful.

Looking at a single snapshot of a project, even your own, it’s really easy to take that version for granted and to misunderstand how much effort went into it, how many decisions were made, and how many ideas were explored, developed, or scrapped. Having a log of some of it helps form a deeper connection with and understanding of it all.

It’s also a good opportunity to practice the communication of technical topics, which is in my opinion an often underestimated, but very important skill.

I also want to thank those who have been reading these posts and engaging with them through likes and comments; it is a nice feeling knowing that I haven’t been talking to myself this whole time. Thanks in particular to @Mastic_Warrior for the kind and encouraging words.

Current status & future

I’m very happy with how it turned out.
The result is still a bit rough in some places and I have ideas for a couple more features, but I’m going to be spending less time on it than I have in the last few months.

I think I’m really onto something with the time tracking idea. (I’m certain I’m not the only one who has had it!)
Even though I’ve still been using it only to track reading time, the goal from the start was to have something more universal, so I’m going to start tracking other activities to hopefully regain even more control over my spare time.

For those interested in something similar, I have some notes which may be useful:

  • Start with small goals that you’re certain you can achieve, stick with them for a while and then reevaluate them.
  • Focus on positive reinforcement only; don’t kick yourself when you fall behind on your goals. Don’t build the concept of “failure” into your system (components which would emphasize the goals you haven’t met).
  • Consider carefully the time frame for evaluating your goals, use an interval long enough to showcase your consistency, but short enough that if you slip up and your goal becomes unachievable in the time frame, it won’t sit there for too long reminding you of it.
  • Build your own stuff. It feels nice. You can shape it into anything you want and you won’t feel trapped by somebody else’s decisions.

This is a most excellent after action report. I am not big on web development let alone front-end and back-end stuff. I have learned some new things along the way and dare I say have been tempted to try to code a project like this.

I am big on the concept of “dog fooding”. If I make something that is useful to me, then it could be useful to others. The only way to develop a better product and answer the end user’s needs is if you actually use it in your day to day practice/business.

Thanks for sharing this journey with me. If I were still hiring people, this would be the exact type of material that I would be looking for in someone’s portfolio.


This is solid dude! I use some kind of OK app on my phone for work. I would love to use this for work and well reading like you showed. Keep it up man!


I could not help myself any longer; having read the entire thread, I could not make it past the Libraries/tools used section of the final overview post (I’ll go back to that after this).

When I opened this thread I wondered if I should have been checking in on it as it progressed, but I am glad to say I did not need to (I did do that for a few other projects, especially later towards the “closure time” and afterwards).

I am glad to say that my initial reservations about the target use of this project were unwarranted, but having seen the conclusion of the project, I would say that they could still apply (e.g. a clock-in timecard for someone on the road); mostly that came down to your realisation about the negative reinforcement color change.

I liked the weekly reports you provided, and I am glad you did not get discouraged by any perceived lack of “audience participation”; in the end (like you said) it benefited you to produce a steady flow of results, and for a project’s success, that is the most useful thing. The demos helped clarify what you were trying to achieve, a nice bonus.

I also like the comment you made about “not over-doing personal project participation”. A lot of developers get burned out by that, and a lot of projects fail because of it, especially “single man” projects.

I was glad to see you “bit the bullet” and finally figured out how to change the color scheme; nothing like “necessity” driving your desire (not to feel like your mouth was full of wiggling toes in this case). They do say “necessity is the mother of invention” for a reason.

I agree with the judges’ choice to include this as a finalist in the Devember 2021 challenge. I wrote (at length, some may say too much) about the impact the seabang project could have. Honestly, I feel the same about this project too (and that brings me to a separate post).

But what I have been eager to say since about halfway through you project thread is …


Did you test the app from someone else’s device or internet endpoint? What are the security implications of that? Adding a mechanism to default to read-only where appropriate would also sidestep the need for users and/or authentication.

Did you try (or test) the app through a VPN gateway, where the server is not actually available publicly (and therefore can’t be hacked; I’m the one with the “brutally simple firewall”)?

If you have some sort of “read-only” feature as default (on top of some sort of “read-write” device whitelist), you could use (what I call) keyless entry, where you just have to know what the current “key” is, rather than a “password” that can be hacked. You can then change the initiator that provides the “key” within the page content. That way you can also change the type of “key”, or how the key is presented, at will. Without the initiator, you don’t even get access to the key, and as the only user, only you know what the key is, where it is, and how to operate it. Once you are done in read/write mode, you just remove the initiator, and there is no trace of the “key”.

(Sorry to be vague and not “show” or “tell” details, but it would defeat the purpose if you implement keyless entry.)

I totally understand the exponential impact of adding users. I think if you add something like I mentioned, you can make the source available publicly, and others can add the “I want to track this thing” parts for you, allowing a “presets” library (maybe?).

I say that because I believe a lot of people (and I don’t mean a small “a lot of people”, I mean a huge “a lot of people”, over time) would use this, especially if they could choose from a “list of things to track”, or, for less interactive kinds of tracking, just add something themselves.

(And maybe this is one of the underlying thoughts of the judges, based on its usefulness, as to why you’re a #devember2021 finalist.) This project would allow maintenance and expansion of the public single-user version (available as source), if you got some other “entity” to pay for helping add those additions, whereby they also provide a private multi-user version.

It’s just a useful tool:

  • I can see a company wanting to make it available to 15,000 employees per month (at their leisure).
  • I can see some companies paying dearly for a custom in-house version.
  • I can also see some “granny” tracking her crossword skills, or her local bingo hall’s “winning numbers”.
  • I can see little Jenny tracking her “lemonade stand” over the summer break.
  • I can see little Johnny tracking his after school “dog walking business”.
  • (hell) I might even use it to track unfinished project interaction :)

Actually, this is the sort of project that would go well on “Dragons’ Den”; if you knew what those two paragraphs above entailed, you could maintain 100% ownership while only parting with 5-10% of the profits.

If you could load a “tracked item” with historical data, then it could be used for projections as well, or “market trends”.

I am 100% sure (now that you have made it) that someone else will come along with a commercial or free multi-user version (paid for with ads or by selling tracking data), especially at this time when a lot of people are forced to be at home with the internet.

I am pretty sure there are people in Poland willing to provide enough (backing, acumen, resources, personnel) so you can still do “your thing” without being loaded down with “providing others the options they want”.

Anyway, I hope you get a lot of use out of it, and expand it as you change your “I would like to track …” focus. And yes, I think it would work nicely as a “soft” time-card for “employees & volunteers” etc. (my initial reservation being that it would be a “hard” or “strict” version; thanks again for proving me wrong).

Cheers

Paul


Hi everybody, sorry for the radio silence.
I was planning to continue gradually developing this app further, but at this point it basically satisfies all of my needs, so I only do some minor bug fixing and adjustments.

The next logical step would be to set up a multiuser version and open it up to the public.
It would certainly be an interesting challenge.
I’ve been working on a list of steps to make it possible.

It would be necessary to:

  • Implement a secure user registration and authentication system
  • Revise the core data model so that users’ data is isolated from one another
  • Implement data isolation in all api endpoints
  • Implement some usage limits to prevent nasty spammers from overwhelming the application
  • Dedicate some portion of my time to the maintenance of the public application
  • Clean up some of the UI elements (pagination, timezones, better visual feedback when data is edited, etc.)
  • Accessibility
  • Fix loads of tiny janky bugs (I’m currently tolerating them and working around them, but I would feel bad if I exposed other users to them)

I came to the realization that it would unfortunately take more effort than I’m willing to spare in the near future.

I know it’s a bit of a disappointing note, especially since this project has been selected as a finalist.
I’ll think about continuing it next devember, but I’m not promising anything.

I hope in its current state it can at least serve as inspiration for others to build simple tooling to make meaningful changes in their lives.


Did you test the app from someone else’s device or internet endpoint? What are the security implications of that? Adding a mechanism to default to read-only where appropriate would also sidestep the need for users and/or authentication.

Did you try (or test) the app through a VPN gateway (where the server is not actually available publicly)?

To be completely honest, I haven’t started working on a multi user system apart from some very preliminary planning.
Thanks for the questions though, they seem like useful pointers for further development.
I made note of them and I’ll keep them in mind in the future.

I say that because I believe a lot of people (and I don’t mean a small “a lot of people”, I mean a huge “a lot of people”, over time) would use this, especially if they could choose from a “list of things to track”, or, for less interactive kinds of tracking, just add something themselves.

Thanks, this is very encouraging.

I am 100% sure (now that you have made it) that someone else will come along with a commercial or free multi-user version (paid for with ads or by selling tracking data), especially at this time when a lot of people are forced to be at home with the internet.

I very deliberately avoided doing research into existing solutions for personal/commercial time tracking before starting this project, but I’m certain that they already exist - mostly as plugins to other applications.

For me at least, the biggest value of this project is that I found a way to regain control of some portion of my spare time.
The application I built for the purpose is secondary.
The technique is more important than the tool.
I wonder if I would feel so strongly about the technique if I had adopted somebody else’s tool, instead of spending so much effort designing and building my own.

This also touches on the thing you mentioned about “soft” vs “strict” time tracking as it may be applied in a commercial setting.
Building a tool for punching in and out is relatively straight-forward, but it takes a lot of care to interpret and act on the resulting data in a constructive way.
I worry (as I think you have), that a tool which would encourage some “strict” interpretation would in the end do more harm than good.


I meant as a single user. You mentioned you tested it in various other ways. I do see a single-user use case as being useful. That was the basis for the questions; not to convince you that “multi-user is a must”, just that certain use cases might come up in the future, and to ask whether you had considered any potential concerns as a result.

I.e., at the moment you could not show any part of your use case on a public computer. It’s “safe” to show others on your phone, as long as no one gets access to it (while it’s open). It should not be too hard to test various scenarios, and/or think of a simple solution that isn’t “multi-user” (or even password) based. Adding multi-user support after that would automatically include that level of protection.

Yeah, as “part of something else”. Your use case was very specific, with remote resource interaction, which made its use very simple. I think, even if you only add tracking tasks as you require them, it will increasingly show how other solutions are lacking.

I agree, but I would not overemphasize your personal investment in the code side of things as contributing to how you feel about it; sure, there is a percentage of that, but what you had as a usable app was exactly what you wanted. Your demos show a very thorough implementation of “what could be done”; you could have just left it as a book cover and time tracking (like a lot of other solutions would do).

The “strict” & “hard” I was expecting was like that of a vehicle travel log or mileage book in a work vehicle, where every second has to be accounted for (something that would end up contributing to the abuse of the user). That was not what you produced, and that simple color change you made, and why, speaks volumes. I guess my initial expectation was based on my experience with “punch clocks” (and they have never been great).

I would not worry too much about “soft” or “strict”; you did not cut any corners, and there was nothing you left out for your use case (the same cannot be said of a lot of other task-based time tracking solutions, which are often paid for). Time tracking is not a new thing, but how you did it is (i.e. not general, boring, strict, or bland): targeted. This reminds me of the Unix philosophy: “make it do one thing and do that well”. You are lucky enough to use a foundation that makes it easy to do the same “targeting” for any future task you might need.

I think if you continue to add tasks as a single-user app (with an underlying idea of multi-user somewhere in the future) with the same fullness of effort you put into the book reading, the less likely someone else will be to choose it as a platform for “strict enforcement”; it’s a karma thing.

And if you choose to add a “clock in - clock out” task, I expect you would implement it in such a way that you would be happy to use it, and for others to use it (that was my implied feeling in my opening reply, but remember that was not why I was eager to reply).

:)