[Devember2021] Hello World 2: The Electric Boogaloo

Keycloak “Learning”

Preamble: I have played around with Keycloak before, but I really don't understand all the ins and outs. I want to learn more about it so I know all the options and constraints when planning out the user structures and how the roles will interact with data. I also need some info on how to plan out the APIs I need to create to interact with it.

Okay, I found a book on Keycloak that came out this year. It happens to be on O'Reilly, so that subscription is coming in handy.
Book on keycloak

Why is a book needed? Because the documentation on how to best operate Keycloak is limited. With that limited info I was able to stumble through to get some interaction with Keycloak, but I knew it could be better.

2 hours later

Application integration study with keycloak

so the book has some good info and explains a decent amount more than the basic information. The book has an associated GitHub with some tools for working with OpenID, to get a better understanding of this standard (and the others). Link to Openid Playground


ch4

… Later same day, post workout & oil change

Getting along in this text, might be close to

This book seems pretty good for what I need; I've really only had one or two hiccups. Looks like the Keycloak version used for this book is 3 major versions behind, even tho it was released this year. It seems v11 was released July 2020 (current release is 15). So, eh.

There is a golang section, which was cool, but it's just the standard req/response handlers, so I will be looking at gorilla options for this afterwards.

even has a golang section
ch8
design thoughts

Read time: about 8 hours. I am not the fastest reader, plus I did most of the exercises.

Post Book time

Okay, I did like this book for the most part (or at least the parts I found relevant, which was 75%). It did a good job, compared to what is available on the web, of explaining how Keycloak works and how to get it to do what you want. It went over some design points which I found useful, in addition to different approaches to problems with some realism thrown in (i.e. don't make your own OAuth framework). The goal of reading this book was to get a better understanding of what I can and need to plan for when designing this app, which it did. There is also some relevant information on how to just do some things I had a hard time figuring out myself. tldr: book 7/10, would read if getting started with Keycloak.

Unrelated

I am getting back into this after some time, due to getting promoted to customer, a social event, and a bender. I should be able to work on this full time starting 12/6. I am not sure when this contest ends (I assume end of year), but I will be continuing afterwards anyway.

Reply

@paulwratt Yeah, I did see that. I'm not sure how long it will take to get that through their CSRs, so I am going to forgo it for now and put in for it later, cause I'd like to have any emails be hosted by myself (aka Linode) rather than use one of the paid options. I was looking at the TOS for some paid email providers and they are a lil spooky.

Next Item(s)

I am working on the plan and design of the app itself now. I got the ol' pen and paper out, trying to stop myself from making things too complex.

So I am going to let Keycloak act as a centralized authorization server (crazy, I know, but there are other options). This will allow me the most flexibility when creating roles and access, cause I know I am going to change them a good number of times during this initial creation phase.

I am just shooting for the MVP right now (I've had to stop myself like 4 times already from trying to plan for the future).
So that means

  • One “Server” (Discord term; actually in the docs it's called a guild)
  • One message channel
  • Only one type of User (so they can read message history and post)

211106

Clients

clients in keycloak

so planning out the objects/clients etc…

  • I am going with conduit for the app name for now
  • so from my current understanding I am going to need:
    • one for the backend (data) (conduit-backend)
      • not super positive on this one; if I use an object-relational mapper I think it might be redundant, but just in case
    • one for the browser app (conduit-browser)
    • one for the server-side interaction API calls (conduit-server-side)

Groups


At least to start i am just going to have these groups
I am going to lean on Keycloak to handle which users have access to which servers (Discord term) for now. I am not positive this is the best way, but speed is the name of the game right now; I need to get some Proof of Concept completed sooner rather than later.

Users


Just some basic test users with assigned happy path roles

it's noon:30. work to cont.

Openid Connect Framework

Okay, the book from the last post did have some example code for using go-oidc, but it did not have PKCE, which seems very important. So I need to find one that does.

There seem to be two options that le Google throws back.

They both seem to be good options. fosite seems to be a bit bigger and is not based on the vanilla Go packages, so I will be going with oidc.
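Since PKCE is the missing piece, here is a minimal sketch of what bolting it onto go-oidc/x/oauth2 by hand could look like. The realm URL, client ID, and redirect below are placeholders, not my actual config:

```go
package main

import (
	"context"
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"fmt"

	oidc "github.com/coreos/go-oidc/v3/oidc"
	"golang.org/x/oauth2"
)

func main() {
	ctx := context.Background()

	// Discover the Keycloak realm's OIDC endpoints.
	provider, err := oidc.NewProvider(ctx, "http://localhost:8080/auth/realms/conduit")
	if err != nil {
		panic(err)
	}

	conf := &oauth2.Config{
		ClientID:    "conduit-browser",
		RedirectURL: "http://localhost:3000/callback",
		Endpoint:    provider.Endpoint(),
		Scopes:      []string{oidc.ScopeOpenID, "profile"},
	}

	// PKCE: random verifier, S256 challenge derived from it.
	raw := make([]byte, 32)
	if _, err := rand.Read(raw); err != nil {
		panic(err)
	}
	verifier := base64.RawURLEncoding.EncodeToString(raw)
	sum := sha256.Sum256([]byte(verifier))
	challenge := base64.RawURLEncoding.EncodeToString(sum[:])

	// Send the challenge with the auth request...
	authURL := conf.AuthCodeURL("state123",
		oauth2.SetAuthURLParam("code_challenge", challenge),
		oauth2.SetAuthURLParam("code_challenge_method", "S256"))
	fmt.Println("visit:", authURL)

	// ...and the verifier with the token exchange (the code comes back on the redirect):
	// tok, err := conf.Exchange(ctx, code, oauth2.SetAuthURLParam("code_verifier", verifier))
}
```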

I fell into the rabbit hole of looking into both of these.
Planning next moves


211209

I will do the oidc integration with the code later

For now I am starting on the code for the application. I have started with some basic APIs and middleware for the server data type. Just gonna start with basic stuff and I will build outward. I am going to integrate the Keycloak SSO later; I am having trouble getting it to work due to lack of understanding on my part (I have already lost like three days on that, I just need to move forward).

By the time you read this, it should be on GitHub (Github Link).

Note: I did remove the infra yaml from the hub for now. I'll put it back later once I have the site's project structure more thought out. I do know I want everything in one repo.

Short term Action items

Services

  • [ ] Get JSON validation integrated
  • [ ] Get Channels data structure complete
    • [ ] Get Channels Service Completed
  • [ ] Get Message data structure complete
    • [ ] Get Message service Completed

Frontend

  • [ ] Flesh out the UI I made the other day in NextJS with bare minimum servers, channels, and message items (just need to display what server/channel you are in, plus the associated messages)
  • [ ] Connect the frontend and backend and wiggle them together for now

Containerization

  • [ ] Containerize the services
    • [ ] Server
    • [ ] Channels
    • [ ] Messages
  • [ ] Containerize Next JS frontend
  • [ ] Throw them on the cluster
    • [ ] Add them to the ha proxy
    • [ ] Check the freaking DNS

Longer Term Action Items

  • [ ] Open API ( Swagger documentation)

    I think I might do this sooner rather than later; I think it would help me with doing validation, plus code gen for the client for the frontend.

  • [ ] Integrate oidc (somehow)
  • [ ] Add a database (currently using an in-code data store)
    Server List

211210

CRUD operations for Servers

broken down handlers

So I have the basic REST CRUD operations for the server service. The Create, Update, and Delete aren't necessary for the MVP, but I thought, since I was here, and for practice. In the future I will have the create, delete, and update interface with Keycloak to add/remove/update the necessary roles, but that is for a later time. I really just need the GET right now and will remove the others from the client.

Client → Runner → Mux → Handler → Validation → Server CRUD function
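To make that chain concrete, here is a minimal sketch of the wiring with gorilla/mux, with a validation middleware in front of the handler. The Server struct and handler names are illustrative, not the repo's actual identifiers:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"

	"github.com/gorilla/mux"
)

type Server struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

var servers = []Server{{ID: 1, Name: "general"}}

// Handler → Server CRUD function (here just ListAll).
func listAll(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(servers)
}

// Validation middleware: decode and sanity-check the body before the handler runs.
func validateServer(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		var s Server
		if err := json.NewDecoder(r.Body).Decode(&s); err != nil || s.Name == "" {
			http.Error(w, "invalid server payload", http.StatusBadRequest)
			return
		}
		next.ServeHTTP(w, r) // note: the body is consumed; the stub below doesn't re-read it
	})
}

func main() {
	r := mux.NewRouter() // the Mux
	r.HandleFunc("/servers", listAll).Methods(http.MethodGet)

	post := r.Methods(http.MethodPost).Subrouter()
	post.Handle("/servers", validateServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusCreated) // create-handler stub
	})))

	log.Fatal(http.ListenAndServe(":9090", r)) // the Runner
}
```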

Mux to Handler

local testing ListAll

OpenAPI 2.0 (Swagger) Meta + Docs

So looking at what needed to be done, I needed a client for the frontend, so I figured I'd do two things at once: documentation and getting some of that yummy code gen. So I started with that.


Powered by Redoc

Swagger docs

$ server-api/handlers/docs.go

```go
// Package handlers classification of Server API
//
// Documentation for Server API
//
// Schemes: http
// BasePath: /
// Version: 0.0.1
//
// Consumes:
// - application/json
//
// Produces:
// - application/json
//
// swagger:meta
package handlers

import "server-api/data"

// Generic error message returned as a string
// swagger:response errorResponse
type errorResponseWrapper struct {
	// Description of the error
	// in: body
	Body data.GenericError
}

// ...
```

Serve the Redoc page and the swagger.yaml
Serve both the Redoc page and the swagger on the base directory

Don't think I'll keep the yaml on forever, but I think it will help with code gen once it gets on the cluster; might make it easier.

Next.js Frontend

Alright, I am getting into the frontend and am straight up out of my element.

So I followed the Next.js quick-start guide. I have some experience with JS, but with bots, so eh. I followed the guide to get two pages up; I still need a table or something to display the GET call.

basic is basic

Should be on the github when you see this.


211212

Added JSON Swagger Def

This package I am going to try and use for Next.js code gen (Next Swagger doc) needs JSON instead of YAML, so I changed the make and runner to do that and host that file alongside the yaml. Quick addition: just have the Go runner serve that file and have go-swagger create a JSON version in the make file.

Switch to typescript project type for Next js Frontend

For the rest of the 12th I was just trying to learn how to best use Next.js and the service. I got the codegen to gen some code, but I need to learn how Next.js wants things. God, I am bad at this frontend thing. Went with openapi-codegen-typescript cause I just wasn't all that positive which one to use in the swagger editor.
Codegen working

211213

After many hours I have come to the understanding that I have no understanding lol.

Ditching Next.js

I thought Next.js would be easier, but I was really struggling with it; for some reason it just wasn't clicking. SO we are going to React. There is more and better documentation for it (yes, I know Next.js has React; let's just chalk it up to me being dumb and move on).

Gonna also try out this Axios, which seems to be a lil simpler than openapi-codegen-typescript and the swagger editor's massive file.


Not sure if the updates where I really don't do anything but fail and learn are useful, but I figure they might be to someone. I only knew a few of these things beforehand; professionally I only work with Java and services, and this full-stack thing has a bloody large amount of moving parts, most of which I've never worked with.

211214

Went with just React and Axios, and I was able to get that to work. The frontend is gonna need more work as time goes on, but I have completed my goal of wiring the front and back end together. The only change needed on the back end was to enable CORS with the gorilla toolkit, so that was straightforward.
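For reference, a minimal sketch of that CORS change using github.com/gorilla/handlers; the allowed origin assumes the React dev server on localhost:3000:

```go
package main

import (
	"log"
	"net/http"

	gohandlers "github.com/gorilla/handlers"
	"github.com/gorilla/mux"
)

func main() {
	r := mux.NewRouter()
	r.HandleFunc("/servers", func(w http.ResponseWriter, _ *http.Request) {
		w.Write([]byte(`[]`))
	}).Methods(http.MethodGet)

	// Allow the React frontend to call the Go backend from another origin.
	cors := gohandlers.CORS(
		gohandlers.AllowedOrigins([]string{"http://localhost:3000"}),
		gohandlers.AllowedMethods([]string{"GET", "POST", "PUT", "DELETE"}),
		gohandlers.AllowedHeaders([]string{"Content-Type"}),
	)

	log.Fatal(http.ListenAndServe(":9090", cors(r)))
}
```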

What been completed

From the 211209 POST

Services

  • [x] Get JSON validation integrated (Ongoing)

Frontend

  • [x] Connect the frontend and backend and wiggle them together for now (Ongoing)

Longer Term Action Items

  • [x] Open API ( Swagger documentation) (Ongoing)

They're all ongoing; as I add more objects they will need to be added, but there is a pattern I can follow moving forward, so the big brain thinking can be used later.

Onward to Databases

So I have a (bad) link to the frontend, so let's try and get the preliminary step done to get the backend (database) up. I have to pick a type, whether relational or document style, but I will see what the Go ORM support is and move from there.

So I do have some worries about doing these integrations before the data design is near finalization, which could lead to rework, but I think standing it up before I start can lead me to making better decisions about what I'll need for the data. Plus, data structure is never finalized.

211218

Docker image for Postgres

Bog standard docker compose
boilerplate docker-compose

Note: I plan to turn on SSL mode (link) later when this gets on the cluster.

Using GORM for database

The documentation for this, and the package itself, makes this rather easy. It is an extra dependency, but for now it's fine. I like ORMs, but I understand why some don't, and agree for the most part.
Automigration is complete for the server struct
Migration of struct done every time the service spins up
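A minimal sketch of what that spin-up migration looks like, assuming GORM v2 (gorm.io/gorm) with the Postgres driver; the DSN and struct fields are placeholders:

```go
package main

import (
	"log"

	"gorm.io/driver/postgres"
	"gorm.io/gorm"
)

type Server struct {
	gorm.Model        // adds ID, CreatedAt, UpdatedAt, DeletedAt
	Name       string `json:"name"`
}

func main() {
	dsn := "host=localhost user=conduit password=secret dbname=conduit port=5432 sslmode=disable"
	db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
	if err != nil {
		log.Fatal(err)
	}

	// Runs every time the service spins up; creates/updates the servers table.
	if err := db.AutoMigrate(&Server{}); err != nil {
		log.Fatal(err)
	}
}
```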

.ENV files and being smooth brain

Took me a few (*cough* hours) to get the env files to work; ended up using the Viper package. Works rather well. There are some random files that need to be cleaned up from some other packages' trial and error, but I will make a push by the time this goes up, so take a gander if you want.
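For reference, a minimal sketch of the Viper .env loading that ended up working; the key name is a placeholder:

```go
package main

import (
	"fmt"
	"log"

	"github.com/spf13/viper"
)

func main() {
	viper.SetConfigFile(".env") // point Viper straight at the dotenv file

	if err := viper.ReadInConfig(); err != nil {
		log.Fatalf("could not read .env: %v", err)
	}

	fmt.Println("db host:", viper.GetString("DB_HOST"))
}
```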


Will continue to work on filling out the rest of the CRUD; the project file/directory layout will be the next item. After those items I will get to start on the channels.

211221

Database CRUD operations integration

Working to get the CRUD operations working correctly with GORM and the database.

Create (AddServer)

No id provided letting GORM auto++

Auto increment is on for gorm

Update

Update

Get {id}

Model is turned on for testing update and delete

Note that the Model object has been added to the server model, but is only turned on in the JSON for the update/delete testing.

Update

Update

The update time has been updated, as well as the object itself

Delete

Delete call


Note that the Server is not in that array, but is still in the database

Get (Get all)


Get all Servers, which will probably be the most used and really the only one needed at this time; the others are there for future use.


This was stuff I have done a lot of times in other languages, nothing special. I won't do an update like this for the rest of the API, but I figured it best to do one.
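A compact sketch of the CRUD calls walked through above, assuming GORM v2 and a Server struct embedding gorm.Model (which is also what makes Delete a soft delete, so the deleted row stays in the database but drops out of queries):

```go
package main

import "gorm.io/gorm"

type Server struct {
	gorm.Model        // ID, CreatedAt, UpdatedAt, DeletedAt
	Name       string `json:"name"`
}

func crudExamples(db *gorm.DB) error {
	// Create: no ID provided, the database auto-increments it.
	s := Server{Name: "general"}
	if err := db.Create(&s).Error; err != nil {
		return err
	}

	// Get {id}.
	var got Server
	if err := db.First(&got, s.ID).Error; err != nil {
		return err
	}

	// Update: UpdatedAt gets bumped along with the object itself.
	got.Name = "general-chat"
	if err := db.Save(&got).Error; err != nil {
		return err
	}

	// Delete: soft delete via gorm.Model, so the row is hidden, not removed.
	if err := db.Delete(&Server{}, s.ID).Error; err != nil {
		return err
	}

	// Get all: the deleted row is gone from the result, but still in the table.
	var all []Server
	return db.Find(&all).Error
}
```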

Channels

Started the work of adding the channels struct to the Server struct, and figuring out the mux, validation, GORM, logging, and error handling when dealing with a nested REST object.

Code has been pushed.

211223

Kinda lost with the channel addition

So in my head it's straightforward, cause adding the channel obj to the Server is straightforward, but integrating with the existing middleware and mux is throwing me for a loop.

0415

Native Postman runs like Sh*t

The native app runs like trash; it eats up RAM and CPU. Not really related to this project, but I thought it was interesting that it's not just a me problem, which I noticed while leaving it open.
Link to Postman Performance Issues (Github)

211228

CRUD for Channels is in an operational state.

It's been pushed. Github link

Messaging is next

Note on timeline

I am running short on time for this, because at a minimum I have the following things to complete to meet the MVP. To be straight up, I am not going to be able to finish the MVP in the amount of time I have left. I will try for the first two, just to see a message on the frontend.

  • Messaging CRUD
  • Frontend in some sort of form
  • Containerization
  • OIDC from the keycloak

Even after #devemeber2021 is over, I’d still be interested in ongoing updates, even if it ends at “functionally complete” and not “damn this is a fine piece of coding and design”.

Yeah, the fails are also noteworthy, and will help some other people in how they feel about their own failures, especially if it's with the same tool or framework. Maybe even one of their devs will come across the post and think “hey, maybe we should have a look at this from a noob perspective”, and fix it - meh, you can always live in hope (or phantasyland).

.env: this is one of the most common web probes I see in my firewall endeavours - I protect this file (non-existent on my setup) by passing back a 4.5Gb DVD ISO image. For fav.ico I 404 it unless there is a HTTP_REFERER.

Anyway, keep up the hard work, it would be nice to see this completed.

Cheers

Paul


211230

Circular imports and GORM

GORM lends itself to putting you in a circular import loop by design, and I have come to learn that from trying to add this temp user struct, due to users and servers being many-to-many.

So each server is going to have many users, and each user is going to have many servers (in theory).

But I did deem them close enough together in scope to be in the same package, like channels and servers.

which leads to the user needing the server to define the user, and vice versa.

Loop de loop
and in Go that is a no-go.

So, what to do?
There are a few options.

How did I not see this coming?

I did know about circular imports, but just didn't think about it until I had already done it. The GORM documentation did help, but it's still on me. But here:

Okay, here (above), do you see it? They are in the same file. I just filtered that out in my head.

Thought I'd just share that. I would like to move away from GORM at some point, cause it's not the fastest performance-wise, but until I get closer to a finalized data model I am going to lean on it to save some time.
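For reference, a minimal sketch of the shape that avoids the cycle: both structs in one package, with GORM's many2many tag creating the join table. The field and table names are placeholders:

```go
package data

import "gorm.io/gorm"

type Server struct {
	gorm.Model
	Name  string `json:"name"`
	Users []User `gorm:"many2many:server_users;"` // join table created by GORM
}

type User struct {
	gorm.Model
	Name    string   `json:"name"`
	Servers []Server `gorm:"many2many:server_users;"` // same join table, other side
}
```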

220101

Now What

So I am going to try and break down each of the services that I have created into SQL schemas/queries with the necessary items, so that I don't have to go into the database for any reason.

So stuff like this:


Looking at that code you'll see items that lock me into Postgres (pgx & “information_schema.tables”), which removes one of the advantages of using pure SQL: that it can be database agnostic. I may end up hating this later, but whatever, send it.
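A minimal sketch of the kind of Postgres-specific check I mean, using pgx (v4) against information_schema; the pool setup is assumed from elsewhere:

```go
package database

import (
	"context"

	"github.com/jackc/pgx/v4/pgxpool"
)

// tableExists asks Postgres's information_schema whether a table is present;
// this is exactly the sort of call that ties the code to Postgres.
func tableExists(ctx context.Context, pool *pgxpool.Pool, name string) (bool, error) {
	var exists bool
	err := pool.QueryRow(ctx,
		`SELECT EXISTS (
		   SELECT 1 FROM information_schema.tables
		   WHERE table_schema = 'public' AND table_name = $1
		 )`, name).Scan(&exists)
	return exists, err
}
```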

I am trying to find a way to not have to maintain both the model for JSON and the SQL schema together in the same struct, so when I need to make changes or additions I can do it in one place, but there doesn't seem to be one that can do that with references to other tables.

Here is sqlx, but I still have to maintain the schema separately.

This is sqlx

GORM did that, but it doesn't seem to be a viable option for microservices, or at least per my understanding of what I want to do with it.

Next things

Reading another Book

If I am going to do the microservices thing, I guess I should read its bible. Cause I think I am missing some information on this topic. Going to cherry-pick some chapters that seem relevant to see if any insight can be gained.

Get some schema together

With this I'd have exactly what I want, but I also have to know what I want, so it's a double-edged sword.

This is not what I am going with, but it shows some things, though it is still very wrong.
Server Information

Red: UUID. This was easier than I thought to get working, but I will need the extension to be enabled for each database it is used in. Just another thing to error-check for and implement.

Blue: Postgres allows for arrays, but after reading up on them they do append when updating they update the whole array which is a waste, so I think going with inter-between table for many-to-many at least.

Yellow: just being able to do a reference. I think I'd have to alter the table by running a query while still using GORM, but doing so makes me just not want to use GORM or any ORM (go-pg did the same thing).

#devemeber2021

So I failed, but I will continue to work on this for some time. I also need to find a new job, so that may eat into my available brain power.

Response @paulwratt

I will keep going with this. It was always a study first, and it lined up with #devemeber2021. Granted, I was hoping to get farther than a temporary user data model before implementing messages, and then realizing my database interface conflicted with my microservices aspirations.

I have no fantasies that anyone would come to this for lessons learned lol, but it helped me get my ideas into some form of organization.

For the .env on the backend, I will remove write access, or if I am really worried about it I will pass them as an argument and keep them in a kube secret.

For the .env on the frontend, I will look into your SSFW, but the frontend seems so far off at this point.

there is not much in SSFW that can offer you a solution. The best I can do is to say, pull up the “known_urls.txt” and search for “.env” to show the sorts of locations that are probed.

EDIT: it should be possible to 404 any HTTP attempt to get /.env via webserver configuration, while still being available to the web app (because it's a local service/process accessing it)

As for the rest of your project, even your 1st couple of posts pointed to it being (possibly) hard to complete.

On the Postgres lock-in, there should be a solution to that. I can't say what that solution might be, but my previous experience says it's possible; it might be one of those things where you need to allocate some time to “go down a deep dark hole” (so to speak).

Anyway, I think you did a lot, learned a lot, and exposed a lot (to us) - all well worthwhile.

Cheers

Paul

220105

Running off generator power, ten inches of snow. TBA the return of normal power, water, and data services.

Microservices Patterns

Book: Microservices Patterns

Having been given an unplanned opportunity to read, I’ve been getting through this microservices patterns book.

It dumps a lot of information in a very thicc way and enlightens me to how little I knew (again).

So the issue I was having with circular dependencies looks like it stems from two areas. One: sticking to my object-oriented background, I'm trying to design the data model first and then tailor the services to it. Second: I'm not using the Saga Pattern.

Granted, the Saga Pattern requires an event streaming platform (like Kafka or MQ). This means I need to get some infrastructure up. I will be going with Kafka for the sole reason that I used MQ a lil bit in my last job, and in the last few interviews Kafka has come up, so this seems like a good opportunity to get some experience with the framework.

Forward.

220106

Still cranking that generator and LTE hotspot

Docker Kafka

Alright, looking at this space in relation to containerization (docker images), there seem to be a couple of options at the time of writing (Jan 2022).

So these seem to be the best supported options for docker images.

Straight up, I had never heard of confluentinc and wurstmeister before. Confluent seems to be a company mostly based on providing support for cloud Kafka; it also has its own packages to interface with Kafka (I will get to those a lil later). Wurstmeister I don't have that much insight into, besides that it seems to have been maintained for a while with a good amount of usage. I will be going with Bitnami, cause I have used them for other images in this project, and for some reason I am less worried about VMware than Confluent.

Zookeeper

This is something that is needed for Kafka to keep track of its stuff. It kinda seems like it's a kube controller for Kafka.

Here is an article that goes into it.

Just wanted to note that in case someone looks at the docker compose and wonders wtf this zookeeper thing is.

Go and Kafka

So like the docker images, I will go over some of the options that exist to interface with Kafka, and then the one I ended up going with (at least for now).

So I ended up using segmentio/kafka-go and let the owner of that repo do all the thinking for me: link to explanation of each of these options. Note that confluent-kafka-go is the one I spoke of earlier.

It seems like a good option, and seems to follow a similar pattern to the pgx package I used for Postgres interfacing.
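A minimal sketch of consuming with segmentio/kafka-go; the broker address, topic, and group names are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"log"

	kafka "github.com/segmentio/kafka-go"
)

func main() {
	// A consumer-group reader for one service's topic.
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"},
		GroupID: "channel-service",
		Topic:   "channel-events",
	})
	defer r.Close()

	for {
		msg, err := r.ReadMessage(context.Background())
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("key=%s value=%s\n", msg.Key, msg.Value)
	}
}
```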

What’s Next

I am going to try and implement the saga pattern for the relationship between the server and channels, for channel CRUD:

  • so on channel creation
  • JSON will be sent through the validator middleware
  • then routed to the create channel handler, which will
  • trigger the events, so an event will fire off to the server service to check if that server exists
  • then, if it does, send a message back to the channel service to create that channel with the associated server id
  • and if not, respond back to the channel service with that error and pass it along.

So by itself there really doesn't seem to be a need for all of that just to check if a server_id exists, but we can add to it later for when we need to check who is doing it, and whether they can, or are even part of that server.

Also, thinking about this, it gives us extra information to easily go back later and implement an audit log for servers. That is far out, but just a thought.

So let's see if I can actualize something similar to this.

Forward.

220107

Out with the GORM, in with pgx

I was working and messing around in another project, trying to get a better understanding of how I am going to do this saga thing; when I opened this project back up, I remembered that I still have to remove all that GORM stuff.

So I gotta finish replacing the GORM database interface with a pgx one.

DB Users and Databases

database per service

Since the book told me to, and I don't know enough not to listen to it, all the services need their own databases, so I figured they should each have their own user (role) to interface with too, to increase visibility.

SQL Error handling

Found this for getting useful information from the error codes sent back from the database, so I can put in some better error handling.

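A minimal sketch of what that better error handling can look like with pgx v4: pull the *pgconn.PgError out with errors.As and branch on the SQLSTATE code (23505 is unique_violation):

```go
package database

import (
	"errors"

	"github.com/jackc/pgconn"
)

// isUniqueViolation reports whether err is a Postgres unique-constraint error.
func isUniqueViolation(err error) bool {
	var pgErr *pgconn.PgError
	if errors.As(err, &pgErr) {
		return pgErr.Code == "23505" // unique_violation
	}
	return false
}
```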

220108 ~ 220122

Had wood to cut and snow to clear, plus trying to dev on generator power and LTE got old. Granted, it's supposed to snow again tomorrow (today)…

PGX Server CRUD is in a working state.

That code has been pushed to Conduit - Github

For the Server service, the CRUD operations are working in some form; testing is still needed, but it's there. GORM has been replaced with the pgx database interface.

It has had some growing pains, cause I am not sure where to put the SQL that I am calling.

pgx, if so? How to?

also link to pgx GitHub - jackc/pgx: PostgreSQL driver and toolkit for Go

There are few new things

Okay, there are a few ways to connect to the database; 'ConnPool' is what I used, cause it is concurrency-safe. Granted, I haven't added anything that needs thread safety, but I don't want to have to mess with that when and if I get to it (docs: Link to Go Docs).

pgx pool

Each service will have its own DBConnPool, cause each of them will have their own db user.
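A minimal sketch of that per-service pool, assuming pgx v4's pgxpool; the DSN (and the service user in it) is a placeholder:

```go
package database

import (
	"context"

	"github.com/jackc/pgx/v4/pgxpool"
)

// NewServerServicePool opens the concurrency-safe pool for the server service;
// each service connects with its own database user to its own database.
func NewServerServicePool(ctx context.Context) (*pgxpool.Pool, error) {
	dsn := "postgres://conduit_server_svc:secret@localhost:5432/conduit_servers"
	return pgxpool.Connect(ctx, dsn)
}
```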
_


On the topic that each service will have its own database and user: I am keeping that in code as well, for the initial setup of each of them. I have not set up any way to have this created yet, but it will be useful later.

I plan on having a db user that creates database users for each of the necessary services, with their associated databases. Then each of those service users would check for and create any missing tables, add missing fields, and add the necessary db extensions.

It's complicated, and you could argue that it's not necessary, but this adds some flexibility and keeps it so each of the services can't access any other service's database (with the necessary perms in place (Postgres: Revoke Doc)).

_
Okay back to the code above

scany package: GitHub - georgysavva/scany: Library for scanning data from a database into Go structs and more

So this allows mapping of the database response to a Go struct. Just make sure you send it the address of a pointer, like seen above, so it can handle nils for other structs.

Scany, with the use of tags, knows what is mapped where. (image below)
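A minimal sketch of scany in use via pgxscan; note the pointer destination and the db tags doing the column mapping. Struct and column names are placeholders:

```go
package database

import (
	"context"

	"github.com/georgysavva/scany/pgxscan"
	"github.com/jackc/pgx/v4/pgxpool"
)

type Server struct {
	ID   string `db:"id"`   // db tags tell scany which column maps where
	Name string `db:"name"`
}

// listServers scans every row straight into a slice of struct pointers.
func listServers(ctx context.Context, pool *pgxpool.Pool) ([]*Server, error) {
	var servers []*Server
	err := pgxscan.Select(ctx, pool, &servers, `SELECT id, name FROM servers`)
	return servers, err
}
```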

Model Updates

Looking at the model changes I wanted to point out some things

UUID

What is a UUID: big ol' slug of delimited alphanumeric unique id.
There are a few different versions of it, but they should work with each other for the most part. I am using v4, which is just determined by a random number. I will be using this for all non-trivial models within this project.
I am letting the database create these upon insertion.

Server Model Schema

There were two places in the REST code that needed to change to accommodate this: the Delete and the Get singleton, cause the ID is passed in the URI.

Currently I am using the “github.com/gofrs/uuid” package for the UUIDs; it seems to work and has those conversion helper functions.
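For reference, a minimal sketch of parsing the UUID out of the URI with gofrs/uuid in those two handlers; the mux var name is a placeholder:

```go
package handlers

import (
	"net/http"

	"github.com/gofrs/uuid"
	"github.com/gorilla/mux"
)

// serverIDFromRequest pulls the {id} path segment, e.g. /servers/{id},
// and converts it into a typed UUID (erroring on malformed input).
func serverIDFromRequest(r *http.Request) (uuid.UUID, error) {
	return uuid.FromString(mux.Vars(r)["id"])
}
```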

SoftDelete

SoftDelete: to not really delete, but just mark deleted and hide it.

If you look back at the model, you'll see I have time fields for create, update, and delete.

Instead of deleting the row, I am updating the time of the transaction on the code side.

So this is fairly easy to do; it's just like the update function. Where some challenge comes in is with retrieval.

The way I am hiding the “deleted” items is by removing any server that has a populated ‘deleted_at’ field after all the server(s) are returned, in code, with a function called “removeSoftDeletedItems()”, which just iterates over the array, removes the servers with said populated field, and returns the clean array.

I know I can do this with a better SQL select statement, but I will look at timings later, cause the only reason I would pick one over the other would be speed or security. Plus, replacing it with a SQL statement would be quick.

So instead of getting all seven servers, you only get the five that are not “deleted”.
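A minimal sketch of that removeSoftDeletedItems() filtering; the field names are placeholders matching the soft-delete columns (the SQL alternative would just be WHERE deleted_at IS NULL):

```go
package data

import "time"

type Server struct {
	ID        string
	Name      string
	DeletedAt *time.Time // nil means not soft-deleted
}

// removeSoftDeletedItems drops every server whose deleted_at field is populated.
func removeSoftDeletedItems(servers []Server) []Server {
	clean := make([]Server, 0, len(servers))
	for _, s := range servers {
		if s.DeletedAt == nil {
			clean = append(clean, s)
		}
	}
	return clean
}
```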


“Status” Field

So this field will be for the Saga microservice pattern implementation; the saga pattern can lack isolation when you have more than one moving and grooving about. So that status field will be a semantic lock, used like a flag for other sagas to notice that a transaction is being performed on this obj and to handle it (by waiting or whatever).

So I know I need one of these, but I'm still not positive on all the details of the implementation for my use case. I don't think it will really be used all that much at the server or channel lvl, but messages and users seem like they need to be further explored; I will get to that when I arrive at that bridge.

Status

What's Next

I have a few options here.

  • I can just copy-pasta my way through channels and messages to rip out and replace the existing GORM
  • Figure out where I am going to put the Servers_Users table
  • Get Users to some state of existing
  • Start the Saga process

Not positive what the next play is, but it should be one or two of these.


Forward, Stay warm.

Going great guns, hope you are off the gennie now too (or maybe not? more snow?)

Glad you have been keeping us posted

Got fed up with it and moved south. Getting set up here, then I will be back strong on this.

Thanks for your patience

220524

Alright I have moved to a warmer location and gotten set up

It’s been a few months and I ready to get back into this.
One issue fresh install of the OS, I still have the code repo, but setting it back up will still be required not much but something. Also maybe some skill leakage(I forgot some stuff)

Databases

So the SQL setup code I had made this rather easy, but even with notes, it appears the roles for each of the service users need to be pushed into the correct ownership roles for their tables.

Easy fix

Not really an issue, but I will need to try this again and get it automated somehow. I will throw that on the list. Then add some tests for each of the services to confirm that it works before running. That can be done later once a CI pipeline has been made; another item to add to the list.

1647

220525

2321

I had some code written for a saga. It's not perfect, but I would like to start to mess with it.

```go
package saga

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

type Saga struct {
	SagaName     string
	Transactions []*transaction
}

type transaction struct {
	TransactionName string
	ServiceName     string
	TransactionAction
}

func (t *transaction) GetTransaction() *transaction {
	return t
}

//TODO find a better name for this. "TransactionAction"
// I have no idea wtf to call this. I have spent four+ hours looking for what to call this.
// It's the action part of the transaction. I am floored on what to call this.
// If anyone ever reads this please let me know.
// Cause a transaction has data and it's a function at the same time; technically it's a noun but idk

// TransactionAction is a generic for the action part of the transaction
type TransactionAction interface {
	Execute() error
}

// These definitions are straight from the book
// Microservices Patterns by Chris Richardson

// RetryableTransaction Transactions that follow the pivot transaction and are guaranteed to succeed.
type RetryableTransaction struct {
	transaction
}

// NewRetriableTransaction Constructor for RetryableTransaction
func NewRetriableTransaction(transactionName string, serviceName string, transactionAction TransactionAction) *RetryableTransaction {
	return &RetryableTransaction{transaction{
		TransactionName:   transactionName,
		ServiceName:       serviceName,
		TransactionAction: transactionAction,
	}}
}

// CompensatableTransaction transaction that can potentially be rolled back using a compensating transaction.
type CompensatableTransaction struct {
	transaction
	// This is the action that undoes this transaction if it fails
	CompensatableAction TransactionAction
}

// NewCompensatableTransaction Constructor for CompensatableTransaction
func NewCompensatableTransaction(transactionName string, serviceName string, transactionAction TransactionAction, compensatableAction TransactionAction) *CompensatableTransaction {
	return &CompensatableTransaction{
		transaction: transaction{
			TransactionName:   transactionName,
			ServiceName:       serviceName,
			TransactionAction: transactionAction,
		},
		CompensatableAction: compensatableAction,
	}
}

// PivotTransaction The go/no-go point in a saga. If the pivot transaction commits, the saga will run
// until completion. A pivot transaction can be a transaction that's neither compensatable nor retriable.
// Alternatively, it can be the last compensatable transaction or the first retriable transaction.
type PivotTransaction struct {
	transaction
}

// NewPivotTransaction Constructor for PivotTransaction
func NewPivotTransaction(transactionName string, serviceName string, transactionAction TransactionAction) *PivotTransaction {
	return &PivotTransaction{transaction{
		TransactionName:   transactionName,
		ServiceName:       serviceName,
		TransactionAction: transactionAction,
	}}
}

// Okay, I am looking back after 2 months of writing most of this, and I am not positive how this
// RestProxy connects to the code above

type TransactionRESTProxy struct {
	TransactionAction
	HttpFunction string
	URL          string
	Body         io.Reader
	Response     *http.Response
}

// NewTransactionActionRestProxy is a constructor for the transaction REST proxy; it just makes sure
// values make sense before making that call to the other services, or its own service
func NewTransactionActionRestProxy(httpFunction string, url string, body io.Reader) (*TransactionRESTProxy, error) {
	var ErrMismatchHttpFunctionWithBody = fmt.Errorf("[ERROR] [SAGA] [TRANSACTION] [REST PROXY] [CREATION] | Cannot have body with this %+v | GET or DELETE", httpFunction)
	var ErrMissingBodyForAssociatedHttpFunction = fmt.Errorf("[ERROR] [SAGA] [TRANSACTION] [REST PROXY] [CREATION] | Cannot have missing body for this function %+v | POST or UPDATE", httpFunction)
	var ErrNonSupportedHTTPFunction = fmt.Errorf("[ERROR] [SAGA] [TRANSACTION] [REST PROXY] [CREATION] | %+v is not a supported http function: Supported functions: POST, GET, UPDATE, and DELETE", httpFunction)
	groomedHTTPFunction := strings.ToUpper(strings.TrimSpace(httpFunction))
	switch groomedHTTPFunction {
	case "GET", "DELETE":
		if body != nil {
			return nil, ErrMismatchHttpFunctionWithBody
		}
	case "UPDATE", "POST": // note: the standard HTTP verb for an update is PUT
		if body == nil {
			return nil, ErrMissingBodyForAssociatedHttpFunction
		}
	default:
		return nil, ErrNonSupportedHTTPFunction
	}
	//TODO Figure out if I should do any check on the URL.
	// I am guessing I should check if it is in the correct domain, like internal, but later.
	// That means I need a file where I keep the domain at for later.
	// I could also have it be dynamic, like a list that is updated when services join the "backend",
	// so have a script run at the start of each service.
	// I have a lot of ideas about this, but none seem too pressing to getting this operational.
	return &TransactionRESTProxy{HttpFunction: groomedHTTPFunction, URL: url, Body: body, Response: nil}, nil
}

func (trp *TransactionRESTProxy) Execute() error {
	client := &http.Client{}
	req, err := http.NewRequest(trp.HttpFunction, trp.URL, trp.Body)
	if err != nil {
		return err
	}
	req.Header.Add("Accept", "application/json")
	req.Header.Add("Content-Type", "application/json")
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer func(body io.ReadCloser) {
		if cerr := body.Close(); cerr != nil {
			fmt.Print(cerr.Error()) // TODO find a way to return this error instead
		}
	}(resp.Body)
	trp.Response = resp
	return nil
}
```

I am not positive that any of this will be useful, but I need to figure out how I am going to keep track of each of the sagas, cause there doesn't seem to be much written in Go for sagas.

Microsoft and Sagas
Microservices Sagas

There appear to be two different ways to go about implementing sagas:

Choreography

Spring (Java) Example of Choreography

So this is the event message system, and why I was looking at Apache Kafka. I think this is the more used one, or at least from what I can tell. The other option being:

Orchestration

Spring (Java) Example Orchestration

So this one has an orchestrator created at the start of a saga's life, which orchestrates it through its life.


So between these two I can see some things. Orchestration would seem to allow me to use the code I have already written, with the saga code above and the status I have in the models for each of the services, but that's not really a good reason. Choreography seems more scalable, cause it's a whole new container to deal with saga events that can be scaled to the needs of the system. This does mean working with another system, cause creating my own event message management system seems a bit much for my given scope, even tho external dependencies are not preferable.

I am not positive which one, but I think I'll try the Kafka one first and see what I get.

Not the most eventful update, but I am getting back into it. Feels good, and a lil overwhelming, cause I have forgotten a lot.

220531

So I am starting with the Choreography pattern, so that means going with Kafka. I have worked with MQ in the past, but the future is now.

So looking at the documentation for Kafka, there is a new feature that has been in testing in Kafka for a few years (since 2020) that removes the required zookeeper meta manager, under the title KRaft.

Which is also the same spelling as Kraft Foods, so gotta use those quotes when searching for it.

So this adds a consensus layer into Kafka and thus removes the additional zookeeper app needed to run Kafka. Granted, it has additional labels that go along with it, like Not Ready for Production and For Testing. I am going to use it anyway, cause if I am gonna be doing the learning, it might as well be new material. I mean, it's not like companies don't put off upgrades for years, causing large technical debt, cause it's expensive and entropy (Right? Right…).

Let me see if I can get it working.

220601

Getting Kafka to some operational state

So I got KRaft to work, I think; then I realized I really didn't understand all that much about Kafka, and using the new stuff leaves documentation rather thin, but whatever.

Okay, it seems to be working with kafka-go. I looked back at my notes and still want to go with this package to interface with Kafka. Even tho the readme says it only supports Kafka up to 2.6.1, and I am using Kafka with KRaft turned on in 3.2, it would appear Apache backward compatibility is working, with at least this simple example.

I have some idea of what I need to do

I think I need to add producer events and consumer events to the services that I have created. Each of the services will have the REST way for clients to send CRUD operations; then, for inter-service communications, services would use Kafka producers and consumers. I'm not positive if I need to replicate the CRUD operations for the events, but I know I will have to have at least approve/decline. So I think I will start with that for channels and servers, then add CRUD event support as needed.

I've found that having only these simple services is making it difficult to think about. When I look at more complex examples of this pattern, with 4 or more services and many more transaction types, it somehow makes more sense for some reason. My services don't really do all that much, which makes me think I'm overthinking it, idk.

220629

_0610
Okay, looking into it, I have a better idea of what Kafka is, but it has come with some issues: I need each service to have its own message command system thingy. This comes with a lot of extra items:

  • Each service is going to need another running main to listen for messages sent via Kafka
    • So I'm going to need a whole other running program (main.go), or use concurrency, to listen for REST and Kafka messages at the same time.
  • I need to make a schema, or pick a command schema, to get the services to send and receive commands, cause right now I am sending byte arrays that were once strings. I think I can use JSON, which means I could just send a REST command, if I pick making a whole new main to run, or just skip that and send it right to the middleware/handlers
  • Saga orchestration: I did some work on this, but I need to make a way to pump these sagas out and have them keep track of what happens.
  • Do I create one big cross-service topic-group or break them down? Only having one feels like an incorrect way to do it.
    • Positives of having just one big cross-service topic-group
      • could save on not having to run separate containers for each listener a service interacts with
      • granted, I could run them all in one main.go with concurrency; depending on traffic and available compute, that could mean it gets slow.
      • Would not have to create and track a separate list of listeners for each of the services and update them when a new service comes online
    • Negatives of having just one big cross-service topic-group
      • That's a lot of messages for each of the listeners to just wade through
      • I am going to need to create a Saga Orchestration Tracking number thingy so listeners know a message is meant for them, but I think I might have to do that anyway, cause one might not get it or something, and other sagas might mess up the order if done at that time
      • Separation of concerns seems to be broken with this, cause down the line there are bound to be services that will never communicate. This might also have security concerns

_0729
Random thought I had: I am going to need to figure out a global timeout time for services and messages, cause I know I am going to have things trying to access data that will be in the Pending state, so a retry will be needed, cause I must not link data to data that is in the pending state. So error handling will need to be done, and the more I think about it, the more complex it gets.

220630

0645

Okay, while looking through the code for the channels, I realized I had not run it since moving everything to a new drive, and while looking at it I was kinda confused.

So while I did create the DB user for the channel service, I did not do anything else. So for future me, I am going to write it down; the SQL queries I left myself in a file were a big help (a sketch of automating this follows the list):

  • Have the superuser create the database with the owner set to the new service db user.
  • Log in as the new db user and connect to that new database.
  • Add the extensions that you need, and confirm they are there.
  • Create the table with your columns.
  • Last note: You have been putting the queries in files in the database/ folder of each of the services; use those, and keep making them until you have a CI pipeline, or at least some automation/script/Dockerfile, do it for you.
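A minimal sketch of automating that checklist with pgx; every name here (user, password, database, table) is a placeholder:

```go
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v4"
)

func main() {
	ctx := context.Background()

	// 1. As the superuser: create the service user and its database.
	admin, err := pgx.Connect(ctx, "postgres://postgres:secret@localhost:5432/postgres")
	if err != nil {
		log.Fatal(err)
	}
	mustExec(ctx, admin, `CREATE USER conduit_channel_svc WITH PASSWORD 'secret'`)
	mustExec(ctx, admin, `CREATE DATABASE conduit_channels OWNER conduit_channel_svc`)
	admin.Close(ctx)

	// 2-4. As the service user on its own database: extensions, then tables.
	svc, err := pgx.Connect(ctx, "postgres://conduit_channel_svc:secret@localhost:5432/conduit_channels")
	if err != nil {
		log.Fatal(err)
	}
	defer svc.Close(ctx)
	mustExec(ctx, svc, `CREATE EXTENSION IF NOT EXISTS "uuid-ossp"`)
	mustExec(ctx, svc, `CREATE TABLE IF NOT EXISTS channels (
		id uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
		name text NOT NULL
	)`)
}

func mustExec(ctx context.Context, conn *pgx.Conn, sql string) {
	if _, err := conn.Exec(ctx, sql); err != nil {
		log.Fatal(err)
	}
}
```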

So I added a “get every channel” to the GET handler and model; rather straightforward. This was more a way to make sure the service still works. I also never did add the Delete/Update to the model at all. Looking at it, I need to rework how the URI server id/channel id works, cause I have confused myself with this seemingly simple thing.

I think I am going to tighten up the channel service to at least match the server service before I put all my big brain Saga plans into action.

Personal Notes
I keep catching myself saying there are a lot of parts in this thing, a few times now while working on this. This is new; idk why, maybe I am just getting old. idk, just an odd note.

_
Let me commit all this junk; I am tired of being in this branch for no good reason.

220701

1034
Working on fleshing out the channel service
Noticed I have a lot of repeated code for error handling for SQL errors that could be exported to its own package. So noting it down to add to the list of shit that needs to be completed for this.

Also, moving to a new branch ate one of my env files, so that was annoying, but eh.

220707

Channel Service update

-0703-
I got the GET to where I want it to be about now. I just finished the PUT; it works, but took some messing about. I spread the GET across different functions inside the handler for the separate param options. I really just need to read the docs for the REST standard, cause technically it's its own service, but data-structure-wise it's connected to the server. I mean, at this level one could argue that I should have just left them together, but I plan on making the channel more complex, and then one could argue it should be its own service. I really don't know; I go back and forth every time I look at it. I assume it's semantics, but I am probably wrong. When I did this professionally I did not have to think about it, just copy what the architect put. I mean, I asked a few of them, and it came down to “that's just how we do it”. I also think about how long this is taking me, which is just sad, but then again I don't already have a huge code base to just look at to see what's right, or anyone to tell me what they want, cause it's just me. It's an odd feeling, one of some joy and aimlessness. Whatever, that was an odd tangent.

2036
Okay, I have looked up the naming conventions: Resource Naming - Restful API

Divide the resource archetypes into four categories (document, collection, store, and controller).

Document
A document resource is a singular concept that is akin to an object instance or database record.
Use “singular” name to denote document resource archetype.

http://api.example.com/device-management/managed-devices/{device-id}
http://api.example.com/user-management/users/{id}
http://api.example.com/user-management/users/admin

Collection
Use the “plural” name to denote the collection resource archetype.
A collection resource is a server-managed directory of resources.

http://api.example.com/device-management/managed-devices
http://api.example.com/user-management/users
http://api.example.com/user-management/users/{id}/accounts

Store
A store is a client-managed resource repository. A store resource lets an API client put resources in, get them back out, and decide when to delete them.
A store never generates new URIs. Instead, each stored resource has a URI that was chosen by the client when it initially put the resource into the store.
Use “plural” name to denote store resource archetype.

http://api.example.com/song-management/users/{id}/playlists

Controller
A controller resource models a procedural concept. Controller resources are like executable functions, with parameters and return values, inputs, and outputs.
Use “verb” to denote controller archetype

http://api.example.com/cart-management/users/{id}/cart/checkout
http://api.example.com/song-management/users/{id}/playlist/play 

** Apparently Consistency is Key **

The forward-slash (/) character is used in the path portion of the URI to indicate a hierarchical relationship between resources.

Hyphens can be used to improve readability - this is one I really did not know about.

Never use CRUD function names in the URI

Use query component to filter URI collection Often, you will encounter requirements where you will need a collection of resources sorted, filtered, or limited based on some specific resource attribute. For this requirement, do not create new APIs – instead, enable sorting, filtering, and pagination capabilities in resource collection API and pass the input parameters as query parameters. e.g.

http://api.example.com/device-management/managed-devices
http://api.example.com/device-management/managed-devices?region=USA
http://api.example.com/device-management/managed-devices?region=USA&brand=XYZ
http://api.example.com/device-management/managed-devices?region=USA&brand=XYZ&sort=installation-date

Okay, that makes sense, and this site is what I was looking for. I remember learning most of this only after reading it, which means I really did not know it. kek.

_
220708_0044
Okay, got the basic functional CRUD operations for the channel service complete. Will have to come back for better error handling, error messages, robustness, code consistency, and cleaner code, but from basic tests the happy path seems to work.

220708_0608
Alright I am looking at the Kafka integration again.

Found this: Performance Test between confluent-kafka-go vs sarama-cluster

Granted, I have no plan on using anything Confluent, but I did overlook sarama. Sarama was mentioned in the kafka-go documentation.

220711

Working on the Kafka integration; gotta find a better way to use the Viper env loader package, cause I am running into problems sending the broker and env information around to the different services. So I think I need to work on this, cause I am getting this error (below). I have gotten similar ones when bringing on the other services too, cause I try to not repeat code. Not a big issue, but it just needs to be addressed.

2133
Okay, this was easier than I thought: you can give Viper multiple paths for the same file name. So I just put in the absolute pwd. This is covered in their docs: Link to Github Readme
fix
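For reference, a minimal sketch of that multiple-path fix; the absolute path stands in for the project root:

```go
package config

import "github.com/spf13/viper"

// Load registers more than one search path for the same config name,
// so services find the .env no matter where they are started from.
func Load() error {
	viper.SetConfigName(".env")
	viper.SetConfigType("env")
	viper.AddConfigPath(".")                     // service's own working dir
	viper.AddConfigPath("/home/me/code/conduit") // absolute project root (pwd)
	return viper.ReadInConfig()
}
```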

2217
Alright, I got some ability to read messages. I went with a whole other runner for this; I thought about having the reader run on the same runner as the REST service, but decided against it for now. I guess I could integrate them later if needed, to make development easier.

Okay, thinking about it, I need to rethink having the one service bus, cause it seems like the number of partitions won't work the way I was thinking. So I think every service will get its own consumer group and topic.

220712

0251
So this is what I came up with. I am positive this will change once the integration is further along, but I need to start from somewhere.

Request

Orchestration

sage orchestration
So in the request I have the saga with some unique id for the saga orchestration, and the name, so I know where to send the response.

Transaction

saga transactions
I don't think this is strictly necessary, but I might need it. It would be good to know, even if not really needed currently.

Body

Request Data
This is self-explanatory: this will be the data, if any, that needs to be sent to the other services.

Response

Type + Body

Type and body
So I have a quick type; I might change that to a response code like the HTTP response codes. Then the body would be where the error or the wanted response goes. I might end up using the type in case sending a body is not needed, like a no-content response.

220712.1

Okay, I am working on the struct for the message type in Go, and I was thinking: do I want two different ‘styles’? So I can keep the style of request/response, or, hear me out, I have a sub-struct with a data/body. I am not sure.

2159
So I changed some things around with Message to compensate from my planning earlier, to this:

I know I will move and change this around as time progresses. I think having both the name and id isn't really necessary; once I come up with a better system I'll just pick one.
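A minimal sketch of the envelope as I currently picture it; the field names are my placeholders for the saga id/name, transaction info, and the request/response bodies:

```go
package saga

// Message is the event envelope sent between services over Kafka.
type Message struct {
	SagaID          string `json:"saga_id"`   // unique id of the saga orchestration
	SagaName        string `json:"saga_name"` // so the reply can be routed back
	TransactionName string `json:"transaction_name"`
	ServiceName     string `json:"service_name"`

	Request  *Request  `json:"request,omitempty"`
	Response *Response `json:"response,omitempty"`
}

type Request struct {
	Body []byte `json:"body,omitempty"` // data, if any, for the other service
}

type Response struct {
	Type string `json:"type"` // might become an http-style response code
	Body []byte `json:"body,omitempty"`
}
```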

I am not speed.

So I got it to send the message type, but the issue is it's very slow. I am guessing it's not sending them until a batch fills up or something. I really have no clue; gonna try and figure it out now.

220713_0008
Okay, there seems to be a post on this. It's just as I thought; it seems I don't fully understand the package.
Github - Issue

Unrelated: Thought this was interesting. Go 1.18 Generics

220713

Wtf is the key used for in Kafka? I saw the key inside of go-kafka for the writer and wasn't sure wtf it was meant for. TIL I think it tells Kafka what partition to use. This is a little hard for me to understand, cause I'm not really using partitions (or at least not on purpose), cause I don't need the redundancy on my local machine for test/development data.

Speeed

Okay, the slow Kafka message receiving has been corrected, and it was a simple fix: just override the writer's defaults for BatchSize (how many messages it buffers into a batch) and BatchTimeout (how long it waits before flushing an incomplete batch). Granted, I was getting more than that delay, but whatever.

Batchtime/BatchSize
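A minimal sketch of the writer fix with kafka-go: override BatchSize and BatchTimeout so single messages flush right away during local testing (the values here are what I'd reach for, not tuned numbers):

```go
package main

import (
	"context"
	"log"
	"time"

	kafka "github.com/segmentio/kafka-go"
)

func main() {
	w := &kafka.Writer{
		Addr:         kafka.TCP("localhost:9092"),
		Topic:        "channel-events",
		BatchSize:    1,                     // don't wait to fill a batch
		BatchTimeout: 10 * time.Millisecond, // flush quickly either way
	}
	defer w.Close()

	err := w.WriteMessages(context.Background(),
		kafka.Message{Key: []byte("saga-1234"), Value: []byte(`{"hello":"world"}`)},
	)
	if err != nil {
		log.Fatal(err)
	}
}
```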

1530
So I am at the point where I need to set up the mux to grab the intended transaction from the event message, and I am not quite sure where I should plug in so I can try and reuse the handlers from the REST side. The issue with that is I am going to need to pass some http.Response/http.Request, which I would make, pass, and then drop later. The other option in my head would be to make new handlers that plug right into the model functions directly.

Use Existing Handlers:
  ++ Reuse code
  ++ Some error catching is already there
  ++ Smaller code base
  – Passing un-needed data

Make new Event Message Handlers:
  ++ Limit what functions other services have access to (cause not every service needs the ability to do everything the REST service does)
  ++ Don't need to deal with request/headers/HTTP things
  – Make all new handlers

… I am sure I am forgetting or not thinking of some items.

I'm still not sold on which is better, or what I am supposed to do. I am going to try and use the existing handlers with casting tomfoolery, to try and not have two handlers doing about the same thing.

220718

_0523

Okay I've been throwing this idea around in my head after getting that reader to work last week. I don't think I need Kafka just to send REST requests between services. After coming to this conclusion, my first thought was: damn, I wasted a lot of time. Then I tried to rationalize it, which I think won out in the end.

  • I learned to a small degree how Kafka works and how to integrate it with golang via the kafka-go package. Learning is my main goal of this whole thing, but I would also like something at the end I can use.
  • Kafka already has retries and scalability built in, but that is not really needed at the current state of things.
    • To get the best use of those features I would have to send all REST requests through it, and that just seems silly.
  • I do think the Kafka reader and writer items could be useful later for things that don't have an easily implementable REST route.
  • I am still going to have to do the orchestration part of the pattern with the saga itself, and adding the Kafka part right now doesn't seem to add anything of value. So what I think I am trying to say is: adding Kafka messaging between services doesn't add much at the current state if I am going to just send a REST request.

Now what…
I'm in the middle of writing the Kafka writer that any service can use. I'll do some basic tests to see if it works as expected, then move on and start on the saga pattern itself.

__
_0555

Wtf is a key in Kafka:
I have to keep looking this up for some reason

Straight from Stack Overflow

“Keys are mostly useful/necessary if you require strong order for a key and are developing something like a state machine. If you require that messages with the same key (for instance, a unique id) are always seen in the correct order, attaching a key to messages will ensure messages with the same key always go to the same partition in a topic. Kafka guarantees order within a partition, but not across partitions in a topic, so alternatively not providing a key - which will result in round-robin distribution across partitions - will not maintain such order…”

So in other words, if first-in-first-out is desired, give the messages the same key, e.g. some UUID.
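So something like this, assuming kafka-go and the google/uuid package (writer setup omitted):

import (
    "context"

    "github.com/google/uuid"
    "github.com/segmentio/kafka-go"
)

func sendOrdered(ctx context.Context, w *kafka.Writer, payloads [][]byte) error {
    // One key per logical stream: every message with this key lands on the
    // same partition, so Kafka preserves their order relative to each other.
    key := []byte(uuid.NewString())
    msgs := make([]kafka.Message, 0, len(payloads))
    for _, p := range payloads {
        msgs = append(msgs, kafka.Message{Key: key, Value: p})
    }
    return w.WriteMessages(ctx, msgs...)
}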

_
_0832

So the generic Kafka writer is complete and seems to work. I was feeling froggy and started to work on a generic reader, but ran into the mux. So either I can pass a generic REST mux and fill it out for each of the services, or create a more custom one per service and function, but both of those are a good deal of work for something I really don't plan on using.

I have a working reader made just for the server service, with a working mux for both GETs, so I'll leave that code in for now.

_2048

Okay after a sleep I have one more thought on the idea. Sending a REST request to the start of the service would work fine, but it would limit the depth of…

220719

_2023
So I am designing the saga pattern in golang and the lack of real inheritance is fucking me up.

_220720_0174

220726

Okay I've been working on the saga pattern. It took me a while of paper and thinking to figure it out, but I have the concept worked out. Looking at it now it seems simple, but it took a lot of big thinking to get here.

I'm stuck now on how to cast an interface{} into its appropriate variable type. I want to standardize the Execute function (code snippet below), cause I want to use TransactionCommand as a collection. I know I could make it easier on myself and just have custom Transaction Commands in each of the Transactions themselves, but I want a repeatable pattern to allow for REST or Kafka, and any more command/message functions I might need or want to use in the future. The idea that I really don't need the saga pattern at the project's current point is not lost on me. I have also argued with myself over whether the channel service and server service should be separated at this point.

type TransactionCommand interface {  
   Execute(i interface{}) (response string, err error)  
}

_
Another item that came to my mind: if you send a Kafka event message, there is no response like REST. Which I knew, and have worked with in the past, but never really tried to do the saga pattern with. So hear me out: if you use the saga pattern with a message broker, you end up needing one permanent listener/reader, then another temporary one every time a service tries to execute a saga. Cause that permanent one doesn't have a straight line of sight into the ongoing function call running the saga itself. So if you want a response from any other service, you need to open a temporary listener keyed with a unique key from the running saga. I grabbed the example diagram from the microservices website. Is this really a big deal? Not really; you can still use the same docker/kube instance of Kafka you already have, but it just seems like a lot of compute to use for something that could happen many times a second.
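A rough sketch of what that temporary listener could look like with kafka-go; the broker, reply topic, and function name are all made up for illustration:

import (
    "context"

    "github.com/segmentio/kafka-go"
)

// waitForSagaResponse spins up a short-lived reader that watches the reply
// topic for a message whose key matches this saga run's unique id.
func waitForSagaResponse(ctx context.Context, sagaID string) (kafka.Message, error) {
    r := kafka.NewReader(kafka.ReaderConfig{
        Brokers: []string{"localhost:9092"}, // placeholder broker
        Topic:   "saga-replies",             // placeholder reply topic
    })
    defer r.Close()

    for {
        msg, err := r.ReadMessage(ctx) // ctx carries the timeout/cancel
        if err != nil {
            return kafka.Message{}, err
        }
        if string(msg.Key) == sagaID {
            return msg, nil
        }
        // Not ours; keep reading. This is the compute cost mentioned above:
        // every in-flight saga is filtering the same reply stream.
    }
}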

___22:57

Execute(i interface{})…

Okay after overthinking it for about 3 hours I got it to go into an http.Request. I tried a few other things before I got here, like JSON marshaling and unmarshaling (if you try this you will get a JSON end-of-line/file-not-found type error), and another option I almost did was looping through the values and just copy/pasting them in, which seemed really dumb, but I was prepared to if I could not think of something else.
A type assertion was the solution; granted, I didn't think I could do this with non-primitives.

	httprequest, ok := i.(http.Request)
	if !ok {
		// log.Fatal exits the program, so no return is needed after it
		log.Fatal("could not assert value to http.Request")
	}
	fmt.Println(reflect.TypeOf(httprequest), httprequest)

Output: http.Request {GET http://localhost:9292/servers 0 0 map[] 0 [] false map[] map[] map[] }

220727

_2337
Trying to OOP in go sure is fun. The more I do it the more fun it is…

Just realized I am going to need to rework this saga again

For the Execute interface on the TransactionCommand, I made each of the sub-interfaces (classes in my head) take in an interface{} so you can pass anything to the sub Transaction Commands, since there are only a few types the commands can be broken into. I spoke about this yesterday. But I just came to the conclusion that the empty structs I was making to accompany it already have to exist so there is something to call Execute on. I just realized I could stick whatever data I need in those structs and remove the incoming interface{} param entirely. Not a major thing; it just made me feel dumb, cause I thought I did some big-brain thing here, then noticed this the first time I went to implement it. Meh, oh well, just thought I'd give some insight, cause this is taking a lot of code to get working.
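In other words, something like this hypothetical before/after (restCommand and its fields are made-up names):

import (
    "errors"
    "net/http"
)

// Before: the data rides in through an empty interface on every call.
type restCommandOld struct{}

func (c restCommandOld) Execute(i interface{}) (string, error) {
    req, ok := i.(http.Request)
    if !ok {
        return "", errors.New("expected an http.Request")
    }
    _ = req // use the request here
    return "", nil
}

// After: the command struct (which had to exist anyway just to hang
// Execute off of) carries its own data, so Execute needs no parameter.
type restCommand struct {
    req http.Request // stashed at construction time
}

func (c restCommand) Execute() (string, error) {
    _ = c.req // use the stored request here
    return "", nil
}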

220728

_0852
Lost like 3 hours, not cause I was messing up the sub-typing and design patterns of golang, but cause apparently I did not know how slices work, which was fun, cause I kept thinking: it can't be the slice, it has to be the more complex part of this. My stupidity knows no bounds. I rewrote the Transaction relationship twice, sure it would work both times, before I even looked at the slice.

220809

Saga Pattern has been replicated

_2017
It took longer than I wanted it to and stole my will a few times, but we're here now.

Okay so there is still some work to be done on it. I'm still testing it, but I'll push what I have now shortly.

Saga Pattern for microservices implementation

I am going to try to explain how this whole thing works now (bear with me).

SagaOrchestrator

This is the main collector for the saga itself. It's got a name and a slice of Transactions.

type SagaOrchestrator struct {
    Name         string
    Transactions []Transaction
}

There's a SagaOrchestrator constructor that checks the order of the Transactions to ensure they are ordered correctly, which is important for proper saga execution. I am still testing this now. The count system was picked cause it seemed the fastest to write and easiest to read. I may go back and make it so that you have to use the constructor in the future, but for now I am fine with it, cause I pass it around a little.

func NewSagaOrchestrator(name string, transactions []Transaction) (*SagaOrchestrator, error) {
    var sagaCreationErrorPrefix = "[ERROR] [SAGA_PATTERN] "

    so := new(SagaOrchestrator)
    so.Name = name

    var countOfCompensatableTransactions = 0
    var countOfPivotTransactions = 0
    var countOfRetryableTransactions = 0
    lastIndex := len(transactions) - 1
    // Loops through the list of transactions to check that an acceptable order has been passed
    for index, transaction := range transactions {

        // Checks for Compensatable Transactions
        // Ruling:
        //  There cannot be a Pivot Transaction prior to any Compensatable Transaction
        //  There cannot be Retryable Transactions prior to any Compensatable Transactions
        if transaction.GetTransactionType() == GetCompensatableTransactionType() {
            countOfCompensatableTransactions = countOfCompensatableTransactions + 1

            // Checking that it's not after any Pivot Transactions
            if countOfPivotTransactions > 0 {
                return nil, errors.New(sagaCreationErrorPrefix + "[SAGA_CREATION] [Compensatable]| Cannot put Compensatable Transactions after a Pivot Transaction")
            }
            // Checking that it's not after any Retryable Transactions
            if countOfRetryableTransactions > 0 {
                return nil, errors.New(sagaCreationErrorPrefix + "[SAGA_CREATION] [Compensatable]| Cannot put Compensatable Transactions after any Retryable Transaction(s)")
            }
        }

        // Checks for the Pivot Transaction
        // Ruling:
        //  There cannot be more than one Pivot Transaction
        //  There are no Retryable Transactions prior to the Pivot
        if transaction.GetTransactionType() == GetPivotTransactionType() {
            countOfPivotTransactions = countOfPivotTransactions + 1
            // Checks that there is not already a Pivot Transaction in the list of Transactions
            if countOfPivotTransactions > 1 {
                return nil, errors.New(sagaCreationErrorPrefix + "[SAGA_CREATION] [Pivot]| There is more than one Pivot Transaction")
            }
            // Checks that there are no Retryable Transactions earlier in the list of Transactions
            if countOfRetryableTransactions > 0 {
                return nil, errors.New(sagaCreationErrorPrefix + "[SAGA_CREATION] [Pivot]| Cannot put Pivot Transaction after Retryable Transactions")
            }
        }
        // Checking Retryable
        // Note: as of 220801 I do not think I need to check for anything for Retryable, cause the other two cover it,
        // but I will leave this count here just in case
        if transaction.GetTransactionType() == GetRetriableTransactionType() {
            countOfRetryableTransactions = countOfRetryableTransactions + 1
        }
        // Checks run at the last Transaction in the list
        if index == lastIndex {
            // Checks that there is a Pivot if there are any Compensatable Transactions
            if countOfPivotTransactions < 1 && countOfCompensatableTransactions > 0 {
                return nil, errors.New(sagaCreationErrorPrefix + "[SAGA_CREATION] | Compensatable Transactions have no Pivot Transaction")
            }
        }
    }
    // So it passed the checks above; it is good to go.
    so.Transactions = transactions
    return so, nil
}

Transaction

Transaction is an interface cause there are multiple types (Compensatable, Pivot, & Retryable). There are currently three functions they have to implement to be considered a Transaction. Since I am using an interface, I won't have access to the data, if any, directly, cause golang doesn't implement classes like other languages (there is no extends). I do have methods I can implement to return the data I need for each of the separate sub-types (this took much longer for me to really understand and internalize, which was most of the time spent on this part).

type Transaction interface {  
  GetTransactionType() sagaTransactionType  
  GetTransactionCommands() []TransactionCommand  
  GetTransactionName() string  
}
GetTransactionType()

This gets a type I created, which I can compare against during execution to know what type of Transaction it is, as each of them has its own logic. The custom type was made just to make coding it a little easier and to insure against an extra space/inconsistent spelling/etc… cause I know myself: I will mess up a string compare somehow and lose many hours of my life.

  type Transaction interface {  
  GetTransactionType() sagaTransactionType  
  ...
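The actual type isn't shown here, but a minimal sketch of what it presumably looks like, reconstructed by me from the getter functions used elsewhere in the code, would be:

// sagaTransactionType is unexported so callers cannot invent new values;
// they have to go through the getter functions, which kills the
// stray-space/misspelled-string class of bugs.
// (The constant names below are my assumption, not the committed code.)
type sagaTransactionType int

const (
    compensatableTransactionType sagaTransactionType = iota
    pivotTransactionType
    retriableTransactionType
)

func GetCompensatableTransactionType() sagaTransactionType { return compensatableTransactionType }
func GetPivotTransactionType() sagaTransactionType         { return pivotTransactionType }
func GetRetriableTransactionType() sagaTransactionType     { return retriableTransactionType }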
GetTransactionCommands()

This returns a slice of 'TransactionCommands'. It's a list cause the Compensatable Transaction should have two by definition; even if the second one is empty, it should have two. The other ones only need one, but this is why it's a slice and not just the Command. The 'TransactionCommand' is the workhorse of this implementation of the saga pattern.

  type Transaction interface {  
  ...
  GetTransactionCommands() []TransactionCommand  
  ...
GetTransactionName()

Just returns the name of the Transaction; it's nice to have when implementing and debugging.
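Putting those three together, a bare-bones Transaction implementation could be as small as this (basicTransaction is a hypothetical name, not from my repo):

type basicTransaction struct {
    name     string
    tType    sagaTransactionType
    commands []TransactionCommand
}

func (t basicTransaction) GetTransactionType() sagaTransactionType      { return t.tType }
func (t basicTransaction) GetTransactionCommands() []TransactionCommand { return t.commands }
func (t basicTransaction) GetTransactionName() string                   { return t.name }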

TransactionCommand

// TransactionCommand is the action part of the Transaction: Execute does the work and returns the response, if any.
// This was made to allow both REST and Kafka implementations, so both
// of the possible solutions' actions can live in the same collection.
type TransactionCommand interface {  
  Execute() (success bool, err error)  
  GetTransactionCommandType() string  
  GetTransactionCommandName() string  
}

GetTransactionCommandType() & GetTransactionCommandName()

Both are just used for debugging. GetTransactionCommandType() is leftover from earlier prototyping, but I think it could be useful later… maybe.

type TransactionCommand interface {  
  ...
  GetTransactionCommandType() string  
  GetTransactionCommandName() string  

Execute()

This interface method allows for many implementations. It only needs to return two things: a proceed boolean and an error. When going through this I wanted to allow for just about any type of 'Command', as the book puts it, and boiled that down into an Execute function that could be an HTTP call to another service, a Kafka message, gRPC, even service-side DB calls, essentially whatever. The book did not go over this, but it's what I took from it. Essentially a list of actions with a pivot point to undo or complete.

NOTE: I am not positive on the (success bool, err error) returns, cause I think I should be able to just send an error for either case and let the implementation of Execute deal with it, which was the main goal of this whole 'TransactionCommand'. I just don't want to leave out the option of needing to send an error for some reason but still wanting to proceed.

type TransactionCommand interface {  
  Execute() (success bool, err error)
  ....  
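As an illustration, a REST-flavored command might look roughly like this; httpTransactionCommand, its fields, and the 2xx-means-proceed rule are my assumptions, not the committed code:

import "net/http"

type httpTransactionCommand struct {
    name   string
    method string
    url    string
}

func (c httpTransactionCommand) GetTransactionCommandType() string { return "REST" }
func (c httpTransactionCommand) GetTransactionCommandName() string { return c.name }

func (c httpTransactionCommand) Execute() (bool, error) {
    req, err := http.NewRequest(c.method, c.url, nil)
    if err != nil {
        return false, err
    }
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return false, err
    }
    defer resp.Body.Close()
    // Treat any 2xx as "proceed"; anything else stops or compensates the saga.
    return resp.StatusCode >= 200 && resp.StatusCode < 300, nil
}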

Saga Execution

A runner for the SagaOrchestrator. It's just a simple loop with some logic in it to adhere to the rules of the saga pattern. The order of Transactions should already be confirmed by using the SagaOrchestrator constructor. There are sub-functions to deal with undoing the necessary Transactions and retrying the ones that allow it. Still testing this logic, but it seems to work well enough for the write-up, and any changes will be in the git. I left a bunch of notes/comments in the code to help me write it and remind me when I come back to it after the weekend.

There are a lot of ghost prints in it, but that's a future-me problem; I am still tweaking it in testing.

import (
    "errors"
    "fmt"
    "log"
    "os"
)

const RetryableTransactionRetryLimit = 3

type sagaRunner struct {
    logger             *log.Logger
    generalErrorPrefix error
}

//ExecuteSaga
/*
 The order of Transactions in the passed list is important:
 - Compensatable Transactions:
 -- NOTE: Compensatable Transactions should have two Transaction Commands: the first one is the intended
    Transaction and the second one rolls that Transaction back in case of failure/issues in any of the
    following Transactions
 -- Can only be placed prior in order to the Pivot Transaction and never after
 -- If there is no Pivot Transaction then there cannot be any Compensatable Transactions
 - Pivot Transaction:
 -- There can only be one per list of Transactions
 - Retryable Transactions:
 -- If the Pivot Transaction exists in the list then Retryable Transactions can only come after that Pivot Transaction
 -- Retryable Transactions should never be in the list of Transactions prior to the Pivot Transaction if it exists
 - NOTE Other use cases:
 -- You can have a Pivot Transaction without Compensatable Transaction(s)
 -- You can have a list of Transactions without Pivot nor Compensatable Transactions
    (so just Retryable Transactions (granted idk why you would tho))
*/
func ExecuteSaga(orchestrator SagaOrchestrator) error {
    // prefixSagaExecutionError on the right-hand side is a package-level prefix string
    prefixSagaExecutionError := errors.New(prefixSagaExecutionError + "[" + orchestrator.Name + "]")
    sagaRunnerLogger := sagaRunner{
        logger:             log.New(os.Stdout, orchestrator.Name+" | ", log.LstdFlags),
        generalErrorPrefix: prefixSagaExecutionError,
    }
    // This is the list of undo Transaction Commands to run in the event of a failure
    var compensatableTransactionCommands []TransactionCommand
    var hasCompensatableTransaction = false

    sagaRunnerLogger.logger.Println("------- Saga Execution Start --------")
    for _, transaction := range orchestrator.Transactions {
        // Pull the list of Transaction Commands
        // transactionCommands[0] is the intended Transaction Command
        // transactionCommands[1] would be the compensating (undo) Transaction Command
        transactionCommands := transaction.GetTransactionCommands()

        sagaRunnerLogger.logger.Println("Transaction Name: " + transaction.GetTransactionName())
        sagaRunnerLogger.logger.Printf("Transaction Type: %+v\n", transaction.GetTransactionType())

        // Execute the Transaction
        // TODO handle the result a lil later
        canProceed, err := transactionCommands[0].Execute()

        sagaRunnerLogger.logger.Println("--- Transaction Executed Results ---")
        sagaRunnerLogger.logger.Printf("Can Proceed to Next Transaction: %t\n", canProceed)
        sagaRunnerLogger.logger.Printf("Was There an Error In that Transaction: %v\n", err)

        // COMPENSATABLE TRANSACTION
        // If it's a Compensatable Transaction type, add its second Transaction Command to the list of undo Transaction Commands
        if transaction.GetTransactionType() == GetCompensatableTransactionType() {
            hasCompensatableTransaction = true
            compensatableTransactionCommands = append(compensatableTransactionCommands, transactionCommands[1])
        }

        // FAILURE IN COMPENSATABLE OR PIVOT TRANSACTION
        // If we got the do-not-proceed flag or an error (canProceed is for Transaction Commands that get responses)
        if !canProceed || err != nil {
            // This is just in case you get a Pivot Transaction without any Compensatable Transaction(s), in an effort to save some compute
            if !hasCompensatableTransaction {
                if transaction.GetTransactionType() == GetPivotTransactionType() {
                    if err == nil {
                        err = errors.New("got a Do Not Proceed, but No Error (So now you get an Error)")
                    }
                    log.Println(err)
                    return err
                }
            }
            // If the Transaction type is Pivot/Compensatable, undo the Transactions using the list of undo Transaction Commands
            if transaction.GetTransactionType() == GetCompensatableTransactionType() || transaction.GetTransactionType() == GetPivotTransactionType() {
                undoErr := undoCompensatableTransactions(compensatableTransactionCommands, &sagaRunnerLogger)
                if undoErr != nil {
                    // If this process fails I really don't want to tell the end user. So I guess I'll just log it.
                    sagaRunnerLogger.logger.Println(undoErr)
                }
                if err == nil {
                    err = errors.New("got a Do Not Proceed, but No Error (So now you get an Error)")
                }
                return err
            }
        }
        // If the Transaction type is Retryable, reattempt it up to the preset retry limit
        if transaction.GetTransactionType() == GetRetriableTransactionType() {
            if !canProceed || err != nil {
                didItFinallyWork, retryErr := retryRetractableTransactionToPresetLimit(transactionCommands[0], &sagaRunnerLogger)
                if !didItFinallyWork {
                    retriesFailed := fmt.Errorf("%v [RETRYABLE] | After the Preset Number (#%d) of Retries this Retryable Transaction(%v) has still Failed. Look at the construction of this Transaction and its Transaction Command(%v)", sagaRunnerLogger.generalErrorPrefix, RetryableTransactionRetryLimit, transaction.GetTransactionName(), transaction.GetTransactionCommands()[0].GetTransactionCommandName())
                    sagaRunnerLogger.logger.Println(retriesFailed)
                    // I am not positive I am supposed to return this error; I will for now
                    return fmt.Errorf("%v | %v ", retriesFailed, retryErr)
                }
            }
        }
        sagaRunnerLogger.logger.Println("------- Next Transaction --------")
    } // End of loop
    sagaRunnerLogger.logger.Println("------- End of Saga Execution --------")
    return nil
}

func retryRetractableTransactionToPresetLimit(command TransactionCommand, runner *sagaRunner) (bool, error) {
    var didItFinallyWork = false
    var err error
    for i := 1; i <= RetryableTransactionRetryLimit; i++ {
        runner.logger.Printf("%v [RETRYABLE] Retrying Retryable Transaction %d of %d", runner.generalErrorPrefix, i, RetryableTransactionRetryLimit)
        var err1 error
        didItFinallyWork, err1 = command.Execute()
        // Stop retrying as soon as an attempt succeeds
        if didItFinallyWork && err1 == nil {
            return true, nil
        }
        // TODO Learn proper error handling; for now chain the attempts' errors together
        err = fmt.Errorf("retry #: %d | error: %v | %v", i, err1, err)
    }
    return didItFinallyWork, err
}

func undoCompensatableTransactions(listOfCompensatableTransactions []TransactionCommand, runner *sagaRunner) error {
    // TODO Need to learn to wrap errors
    // NOTE: these undo commands run in the order they were added; a classic saga usually compensates in reverse order
    undo := errors.New("[UNDO] |")
    for _, transactionCommand := range listOfCompensatableTransactions {
        workedAsExpected, err := transactionCommand.Execute()
        if err != nil {
            runner.logger.Printf(" %v %v | TransactionCommandName: %v, Error: %v", runner.generalErrorPrefix, undo, transactionCommand.GetTransactionCommandName(), err)
            return fmt.Errorf("%v | %v | %v", runner.generalErrorPrefix, undo, err)
        }
        if !workedAsExpected {
            runner.logger.Printf("%v %v | TransactionCommandName: %v. Got a do-not-proceed on the undo; that's not good, so here is your sign", runner.generalErrorPrefix, undo, transactionCommand.GetTransactionCommandName())
        }
    }
    return nil
}
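Tying it together, here is a hypothetical wiring that reuses the sketch types from earlier in this write-up; the saga, transaction names, and URLs are invented for illustration:

func runCreateServerSaga() error {
    reserve := basicTransaction{
        name:  "reserve-channel",
        tType: GetCompensatableTransactionType(),
        commands: []TransactionCommand{
            httpTransactionCommand{name: "reserve", method: "POST", url: "http://localhost:9292/channels"},
            httpTransactionCommand{name: "release", method: "DELETE", url: "http://localhost:9292/channels"}, // the undo
        },
    }
    create := basicTransaction{
        name:  "create-server",
        tType: GetPivotTransactionType(),
        commands: []TransactionCommand{
            httpTransactionCommand{name: "create", method: "POST", url: "http://localhost:9292/servers"},
        },
    }
    notify := basicTransaction{
        name:  "notify",
        tType: GetRetriableTransactionType(),
        commands: []TransactionCommand{
            httpTransactionCommand{name: "notify", method: "POST", url: "http://localhost:9292/notifications"},
        },
    }

    // Compensatable, then pivot, then retryable: the constructor enforces this order.
    orchestrator, err := NewSagaOrchestrator("create-server-saga", []Transaction{reserve, create, notify})
    if err != nil {
        return err
    }
    return ExecuteSaga(*orchestrator)
}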

Other items

For future me: if you see this error in the stack, it most likely means the HTTP request timed out; just add some time to the timeout.
"context deadline exceeded"


Greetings all

Instead of continuing to spam this forum thread, I took the last month to build a site using Hugo, which is where I will put my dev notes.

https://ed-m.dev