Post your coding projects

Rules:
* don't claim other software as your own
* don't be a flaming dingus if you think their project is stupid

suggestions:
* pretty much any project, whether it's just for learning/personal use or for release (unless it's just an example from a book or a tutorial or something..)
* you don't have to post code; it can be pictures, a description, a transcription of input/output, etc.

4 Likes

Can't really post anything since I am still wrapping my head around the concept of it, but I am going to attempt procedural world generation with races and all that jazz...
I guess the best way will be step by step, but I want to make sure I can do it (at least have the idea of how to do it in my head) before I start the actual coding...

ptext

ptesto.txt is the output file, produced by scanning ctest.txt: it holds the data needed to reconstruct the original ptest.txt from the ctest.txt block of text.

It's for obfuscation when passing encrypted data / a bit of extra security, though ideally you'd still be using encrypted data made up of plain-text characters. You send the numbers as sort of the payload, and ctest (the ciphertext) acts as the key. Obviously you could use steganography or whatever other methods for delivering the numbers, and then they could be applied to basically any block of text, from a short news article to a forum post, email, SMS, etc.
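
the gist is roughly this (a loose sketch, not the actual ptext code; the strings stand in for ptest.txt/ctest.txt, and it always takes the first occurrence of a character where the real thing would want to spread them around):

```c
/* rough sketch of the ptext idea: for each character of the message, emit an
 * index into the key text; the receiver rebuilds the message by looking those
 * indices up in the same key text. strings stand in for ptest.txt/ctest.txt. */
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *keytext = "the quick brown fox jumps over the lazy dog";
    const char *message = "hello";

    /* encode: message -> list of indices into keytext
     * (always picks the first occurrence, which a real version wouldn't) */
    int idx[64];
    int n = (int)strlen(message);
    for (int i = 0; i < n && i < 64; i++) {
        const char *p = strchr(keytext, message[i]);
        if (!p) { fprintf(stderr, "'%c' not in key text\n", message[i]); return 1; }
        idx[i] = (int)(p - keytext);
        printf("%d ", idx[i]);
    }
    printf("\n");

    /* decode: indices + keytext -> original message */
    for (int i = 0; i < n && i < 64; i++)
        putchar(keytext[idx[i]]);
    putchar('\n');
    return 0;
}
```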

ofweb

is a console for making/managing HTML (JavaScript) redirections for use on a subdomain/domain. You can use it for anything from local files on your own machine, to an intranet, to the wild; you just have to have access to whatever it's redirecting to.

Basically a pet project as an alternative to Freenet, but instead of being centralized, anyone at all can have their own portal/hub 'internet'.
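
each redirection entry basically boils down to a tiny page like the one this sketch writes out (made-up target and filename, not the actual ofweb output):

```c
/* sketch: generate a minimal HTML/JS redirection page. the real console does
 * the managing part; this just shows what one entry amounts to. */
#include <stdio.h>

int main(void) {
    const char *target = "https://example.com/somewhere";   /* made-up target */
    FILE *f = fopen("redirect.html", "w");
    if (!f) { perror("fopen"); return 1; }
    fprintf(f,
        "<!DOCTYPE html>\n"
        "<html><head>\n"
        "<meta http-equiv=\"refresh\" content=\"0; url=%s\">\n"
        "<script>window.location.replace(\"%s\");</script>\n"
        "</head><body>\n"
        "<a href=\"%s\">click here if you are not redirected</a>\n"
        "</body></html>\n",
        target, target, target);
    fclose(f);
    return 0;
}
```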

motp

multiple one time pad
isn't cryptographically secure, unfortunately (at least at this point, but it was more for learning than serious use)

does a bunch of junk to get better RNG out of C, using some math operations against a few sources: a processing-speed seed (it runs a loop at the start for a second, generating a number, and if it's over a certain value it updates a total; it just runs at max speed for a second, so it's not the same number on the same processor even if you locked the clock speed, because of OS scheduling), plus the time, the number the user enters, and the amount of time they spent sitting at the prompt entering that number (if they didn't pass the args to skip console mode).
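
the mixing is roughly along these lines (a loose sketch of the idea, not the motp code; the exact operations are made up):

```c
/* sketch of the seed-mixing idea described above: spin for a second counting
 * how far you get (varies with OS scheduling), then fold in the time, the
 * number the user typed, and how long they sat at the prompt. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    /* spin for at least a full second; the count reached differs run to run
     * because of scheduling, even at a fixed clock speed */
    unsigned long spin = 0;
    time_t start = time(NULL);
    while (time(NULL) - start < 2)
        if (++spin % 7 > 3)      /* the "over a certain number" bit, loosely */
            spin += 3;

    /* user input, and how long they took to enter it */
    time_t prompt_start = time(NULL);
    unsigned long user = 0;
    printf("enter a number: ");
    if (scanf("%lu", &user) != 1) user = 1;
    unsigned long prompt_secs = (unsigned long)(time(NULL) - prompt_start);

    /* mash the sources together into one seed */
    unsigned long seed = spin ^ (unsigned long)time(NULL)
                         ^ (user * 2654435761UL) ^ (prompt_secs << 16);
    srand((unsigned)seed);
    printf("seed: %lu  first value: %d\n", seed, rand());
    return 0;
}
```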




a tabletop game thing I was working on, set up sort of like a shell where you call the commands you want to do.

The first pic is just some HUD-type stuff I was messing with.
The second pic is from a quest engine I made before the later pic,
which was a general-purpose engine, so that I could make new commands/tasks without having to compile everything as its own program (had like 85 programs or some shit at one point).

And the last pic is where I added switchable ASCII stuff (before I made the gpeng, which had it turned on), where it could print a banner, like for smithing you could have an anvil/hammer or something, etc.

But the thing about the quest engine/gpeng is that it was the first time I made a program that generated significant portions of its functions from an external data file, as scripting basically.

I also made a program to make the quest/gpeng files, and for the quests, some stuff for managing the 'database': when you pick a place to go, it generates a number to pick from the database of quests (from the non-shady ones if you aren't shady, otherwise it generates from the shady or the regular quests), and then opens that file. So 36 banks in total; there's just an index with a number for the total in each bank.
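
roughly the lookup, as a made-up sketch (file layout and names are guesses, not the actual thing):

```c
/* sketch of the quest-bank lookup described above: an index file holds how
 * many quests each of the 36 banks contains, so picking a quest is "pick a
 * bank, roll a number under that bank's total, open that file". */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NBANKS 36

int main(void) {
    int totals[NBANKS];

    /* index file: one count per bank (made-up path/format) */
    FILE *idx = fopen("quests/index.txt", "r");
    if (!idx) { perror("index"); return 1; }
    for (int i = 0; i < NBANKS; i++)
        if (fscanf(idx, "%d", &totals[i]) != 1) totals[i] = 0;
    fclose(idx);

    srand((unsigned)time(NULL));
    int bank = 7;                          /* chosen from the place/shadiness */
    if (totals[bank] <= 0) { fprintf(stderr, "empty bank\n"); return 1; }
    int quest = rand() % totals[bank] + 1; /* roll a quest inside that bank */

    char path[64];
    snprintf(path, sizeof path, "quests/bank%02d/%d.txt", bank, quest);
    printf("opening %s\n", path);
    return 0;
}
```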

But it has a pack manager of sorts, where you can export packs of quests you've made and import quests others have made. It works by blanking the thing and reinstalling every pack you've marked (meaning you can remove the 'base' one if you don't want the default quests), since it doesn't have enough indexing data to know which files are which. So it has to start from nothing to make sure it removes all the old ones (in case you weren't just adding new files) and to ensure the quest totals stay continuous.

mserv

is my most recent project. It's to be a media server, basically, but not like Plex where it's a personal Netflix; it's more like a TV. It manages playlists, calculating them for distribution based on the lower sublists (in that list, anyway), and has an index so you can easily add/remove lists from the other lists and recalculate for:
* distributed: one segment from each sublist, spread relatively evenly, until they all finish, and then it restarts
* marathon: it just plays each list start to finish, top to bottom, and then restarts
It also tracks the position in each list, so you can switch and resume at the next file, or (optionally) at the beginning of the last one you watched.

It consists of 2 programs: mserv, the actual service/daemon, and mcli to schedule/issue commands.
I also made a cheap hack (using ls) to generate the sublists.

4 Likes

I have something I could possibly show, just need to think of a good way to display it. It isn't anything special, it's just a frontend for my REST API to a MongoDB, written in Python.

hmm, you could do screenshots or descriptions of input/output, description(s) of loosely what it does/allows you to do, link a video, or whatever.

i mean, i'm sure for some projects people may not be set on how it works, or they might not want to share specific details if it's a closed-source thing or a company project, etc.

1 Like

Rise from your slumber

Now that I've finished with my latest article, I'm switching gears for a bit to work on some backup software. I'll post a link when I have some actual code. For now it's just conceptual work.

2 Likes

Hacking at 6502 ASM, though I don't really know how it works; I'm just tweaking things to figure out how stuff works. (It's a sprite of a skull that I drew in an online sprite editor and exported.)

1 Like

going to code a game soon. will post more about it when I do.

1 Like

did a bit more work on mserv (well, mostly the CLI component, but a bit with mserv also, and with mkli)

added an option for 'numbered play', where you can interrupt previous instructions to play a specified number of segments and then return to whatever it was doing before (playing, or just the low-power/wait state).

a multiplier option.

and some math stuff, kinda:

A bit of refining to the math for the distribution of calculated lists. I had forgotten you have to bootstrap the thing, since it just decides when to put in a segment from a sublist by dividing the total number of segments by the number of segments in that sublist, and checking that against how many segments have been played/picked so far in the list.

To the effect that if you have a list of 5 sublists of 24 segments each, you have a total of 120, and the division tells it to play an episode of each one every 5 segments. So by forcing the first pick from each sublist, by the time it comes around again 5 will have already gone, so it's time to play the second episode; by the time all 5 have gone again it will be 10, and so on. Otherwise, if none had been picked, it would exit the loop (as the outermost section only runs once per segment) without ever updating the counter for the amount picked, so it would just fail every time. With the bootstrap it only screws up the distribution a little bit at the start/end, and the spread hasn't been too bad in my testing (the worst I saw was like 15%, but the average seems like a 5-10% spread between when the first sublist ends and when the last segment in the list plays).

I messed with the multiplier option and made some lists between 2,500 and 27,000 segments; even when one sublist is crazy long and the others are short (like the longest being 25,700 and the shortest being 13), it was still only around a 9% spread.
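
roughly the calculation plus the bootstrap, as a throwaway sketch (not the real mserv code, just the idea, with 5 sublists of 24 segments):

```c
/* sketch of the "distributed" pick loop plus the bootstrap described above.
 * 5 sublists of 24 segments, total 120, so each sublist is due every
 * 120/24 = 5 segments. */
#include <stdio.h>

#define NLISTS 5

int main(void) {
    int counts[NLISTS] = {24, 24, 24, 24, 24}; /* segments per sublist */
    int next[NLISTS]   = {0};                  /* how many picked from each so far */
    int total = 0, played = 0;
    for (int i = 0; i < NLISTS; i++) total += counts[i];

    /* bootstrap: force one segment from every sublist up front, otherwise the
     * division test below never fires and the whole thing bails immediately */
    for (int i = 0; i < NLISTS; i++) {
        printf("%3d  sublist %d  segment %d\n", played, i, next[i]);
        next[i]++, played++;
    }

    while (played < total) {
        int picked = 0;
        for (int i = 0; i < NLISTS; i++) {
            if (next[i] >= counts[i]) continue;    /* this sublist is finished */
            int interval = total / counts[i];      /* 120/24 = due every 5 segments */
            if (played / interval >= next[i]) {    /* its turn has come around */
                printf("%3d  sublist %d  segment %d\n", played, i, next[i]);
                next[i]++, played++, picked = 1;
            }
        }
        if (!picked)                               /* nothing due yet: just take */
            for (int i = 0; i < NLISTS; i++)       /* the first unfinished sublist */
                if (next[i] < counts[i]) {
                    printf("%3d  sublist %d  segment %d\n", played, i, next[i]);
                    next[i]++, played++;
                    break;
                }
    }
    return 0;
}
```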

Some performance tuning/optimization of the multiplier mode: it now just shifts the segment number down once it goes past the original count, instead of trying to scan for line 47 in a 24-line file and rewinding until it finds it. So in that example it would just subtract 24 whenever the number was over 24, until it was below 24.

Which speeds it up a lot once you're scanning the same file to do, like, 100 plays of that sublist; by the 100th play it would have been scanning the file 100 times. It still has a similar slowdown in that the more times you play it, the more work it has to do shifting the number down, but it's substantially faster and isn't stuck at disk speed (if the file doesn't get cached into system memory).
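
the shift is basically this (toy numbers, not the real code); the repeated subtraction and a straight modulo land on the same line:

```c
/* sketch: map a "virtual" segment number in a multiplied sublist back onto a
 * physical line of the original 24-line file, instead of rescanning the file */
#include <stdio.h>

int main(void) {
    int original = 24;      /* lines in the real sublist file */
    int virtual_seg = 47;   /* e.g. the 47th play of a multiplied sublist */

    /* repeated-subtraction version described above */
    int line = virtual_seg;
    while (line > original)
        line -= original;
    printf("by subtraction: line %d\n", line);   /* 23 */

    /* equivalent single step with modulo */
    int line2 = ((virtual_seg - 1) % original) + 1;
    printf("by modulo:      line %d\n", line2);  /* 23 */
    return 0;
}
```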

so pretty okay with that so far

an example file (I'm working on readme/documentation stuff currently):
>./mcli
enter the list you want to edit or create
list: >cartoons2
current: db/cartoons2

total segments 0

enter the path/name of the list to add it 
or remove it if its listed above already
list: >ed edd n eddy
current: db/cartoons2
db/ed edd n eddy
segments 70

total segments 70

enter the path/name of the list to add it 
or remove it if its listed above already
list: >babyblues
current: db/cartoons2
db/ed edd n eddy
segments 70
db/babyblues
segments 13

total segments 83

enter the path/name of the list to add it 
or remove it if its listed above already
list: >commercial
current: db/cartoons2
db/ed edd n eddy
segments 70
db/babyblues
segments 13
db/commercial
segments 17

total segments 100

enter the path/name of the list to add it 
or remove it if its listed above already
list: >x
enter the list name to apply or remove a multiplier
list: >babyblues
enter the multiplier: >5
current: db/cartoons2
db/ed edd n eddy
segments 70
db/babyblues
segments 13 x5
db/commercial
segments 17

total segments 152

enter the path/name of the list to add it 
or remove it if its listed above already
list: >x
enter the list name to apply or remove a multiplier
list: >commercial
enter the multiplier: >4
current: db/cartoons2
db/ed edd n eddy
segments 70
db/babyblues
segments 13 x5
db/commercial
segments 17 x4

total segments 203

enter the path/name of the list to add it 
or remove it if its listed above already
list: >c
this may take a few moments

enter the list you want to edit or create
list: >exit

the text following > was user input; I just piped the output of the program to a file, so I had to backfill the input.

I'll probably add it so it outputs the total number instead of just the original number/multiplier (a bit easier for some, I guess; when you get into longer lists with many more sublists, doing the math might become more tedious).

the mentioned readme, lol:
no arguments to run the cli/console to calculate lists
commands:
-h, --h, -help, --help or ?, to print this list
-p, --p, -play, or --play, to play the list named after
 -r, --r, -resume, or --resume, after the listname
  to resume play at the beginning of the file
  instead of starting the next
-n, --n, -numberedplay, or --numberedplay to play given number of segments
 -rn, --rn, -resumen, or --resumen
  to begin at start of file on the numbered play
 -ro, --ro, -resumeo, or --resumeo
  to begin at start of file of the original list
 -rb, --rb, -resumeb, or --resumeb
  to begin at start of file for both plays
-start, or --start, to resume playing
 -r, --r, -resume, or --resume
  to begin at start of file
-stop, or --stop, to stop playing
-setpos, or --setpos, to set the position of what's to be played
-setpos-, or --setpos-, to subtract the entered number
-setpos+, or --setpos+, to add the entered number
-shutdown, --shutdown, -halt, or --halt,
to shutdown mserv


for mkli:
proper syntax:
mkli filename(minus extension) path/directoryname

mkli makes a list from a directory in the proper format.
But if the directory is not entirely made up of the media files you
want to play, or the files aren't named in a way that maintains a specific
order (if desired),

then you will need to open that file and edit it, to remove the lines
for the files you don't want to play or to ensure the order is as desired.

Then you will need to change the number on the first line to reflect any
changes made (if you removed files).

Otherwise, you start mserv and then use mcli to issue commands or make
the lists of sublists.

The format is such that you can play the sublists directly as well, using
the exact same method as playing the calculated lists.

If you modify a sublist or a calculated list, you will need to recalculate
every list that list is present in, to update them to reflect those
changes.

the first part is the output from doing -h or whatever, and then below that I'm (shittily) explaining some stuff. haven't finished it yet, but I think the formatting for the help output is mostly done..

oh yeah..

modified mkli, so there's an option to pipe the ls output to the end of the file, then a command to scan the file (to fill in the line count)

so you can fire multiple directories into a single file, edit out anything you don't need, and then have it make the count for you
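
the scan/count step amounts to something like this (assuming the file is just the appended ls output with no count line yet; a sketch, not the actual mkli code):

```c
/* sketch: read a list of media files (one per line, as appended from ls),
 * then rewrite the file with the total count on the first line, which is the
 * format described in the readme above. */
#include <stdio.h>

int main(void) {
    static char lines[4096][512];
    int n = 0;

    FILE *in = fopen("cartoons.txt", "r");   /* made-up file name */
    if (!in) { perror("open"); return 1; }
    while (n < 4096 && fgets(lines[n], sizeof lines[n], in))
        if (lines[n][0] != '\n') n++;        /* skip blank lines */
    fclose(in);

    FILE *out = fopen("cartoons.txt", "w");
    if (!out) { perror("write"); return 1; }
    fprintf(out, "%d\n", n);                 /* total goes on the first line */
    for (int i = 0; i < n; i++)
        fputs(lines[i], out);
    fclose(out);
    return 0;
}
```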

2 Likes

dunno, pretty sure Google already has one, what with the multiple lawsuits for packet sniffing with the Street View vehicles or whatever, lol.

not 100% sure of the technical details.. but it sounds like they were rolling around with cards in monitor mode or some shit, because there was some stuff about filtering, but also rebuttals about not connecting to anything / not gathering anything that wasn't already openly available, idk..

too lazy to look into it, so I'm just gonna assume the worst since it's Google and they lost more than once / were fined laughably small amounts of money

1 Like

think you replied to the wrong person

nah, was replying to the stuff before it was deleted :p

1 Like

I actually could do with some help sorting out a few problems architecting the data-storage aspect of my backup software. I plan on doing chunk-based backups, so the data is deduplicated and only modified parts of the files are sent across the network.

I suppose I could design a container that can be up to 4GB in size and holds 1024 4MB chunks. If I could write them one after another, and read in 4MB at a time, pulling in the data and deserializing it as an object, I suppose it would work. I guess I'll have to prototype that to see...
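
Roughly the layout I'm picturing, as a quick sketch (placeholder file name, no metadata handling, and a real version would need 64-bit offsets like fseeko to reach the full 4GB):

```c
/* sketch of the fixed-slot container idea: 1024 slots of 4 MB, so chunk i
 * lives at offset i * 4 MB and you can seek straight to it. */
#include <stdio.h>
#include <string.h>

#define CHUNK_SIZE (4L * 1024 * 1024)   /* 4 MB slots   */
#define MAX_CHUNKS 1024                 /* 4 GB per container file */

/* note: plain fseek takes a long; real code would use fseeko / 64-bit offsets
 * to address the whole 4 GB on 32-bit long platforms */
static int write_chunk(FILE *f, long index, const void *data, size_t len) {
    if (index < 0 || index >= MAX_CHUNKS || len > (size_t)CHUNK_SIZE) return -1;
    if (fseek(f, index * CHUNK_SIZE, SEEK_SET) != 0) return -1;
    return fwrite(data, 1, len, f) == len ? 0 : -1;
}

static int read_chunk(FILE *f, long index, void *buf, size_t len) {
    if (index < 0 || index >= MAX_CHUNKS || len > (size_t)CHUNK_SIZE) return -1;
    if (fseek(f, index * CHUNK_SIZE, SEEK_SET) != 0) return -1;
    return fread(buf, 1, len, f) == len ? 0 : -1;
}

int main(void) {
    FILE *f = fopen("file.storage", "w+b");
    if (!f) { perror("fopen"); return 1; }

    const char *payload = "pretend this is 4 MB of backup data";
    write_chunk(f, 3, payload, strlen(payload) + 1);

    char buf[64] = {0};
    read_chunk(f, 3, buf, strlen(payload) + 1);
    printf("chunk 3: %s\n", buf);
    fclose(f);
    return 0;
}
```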

a container like.. if you write a file with dd and mount it? at least that part should be easy

1 Like

Eh not that sort of container.

I'm trying to do this all in code. Let me explain how I plan on doing the backup.

You have a client and a server. The client reads in data in 4MB chunks, checksums them, and compares the checksums against the server's checksum database. If it needs to send a chunk, it sends it. The server then receives and stores the data in some sort of container, adds the hash to the array, and references the chunk as part of file X, revision Y, and index Z (for restore order).
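
The client-side half of that flow, sketched out (FNV-1a is only there to keep the example self-contained; the real thing would use a proper cryptographic hash like SHA-256, and have_chunk() is a stand-in for the server's checksum database lookup):

```c
/* sketch: read a file in 4 MB chunks, hash each one, and only "send" the
 * chunks whose hash the server doesn't already know. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define CHUNK_SIZE (4u * 1024 * 1024)

/* FNV-1a, purely to keep the sketch dependency-free */
static uint64_t fnv1a(const unsigned char *data, size_t len) {
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 1099511628211ULL;
    }
    return h;
}

static int have_chunk(uint64_t hash) {   /* pretend server lookup */
    (void)hash;
    return 0;                            /* "never seen it", for the demo */
}

int main(void) {
    FILE *f = fopen("somefile.bin", "rb");   /* made-up input file */
    if (!f) { perror("fopen"); return 1; }

    unsigned char *buf = malloc(CHUNK_SIZE);
    if (!buf) { fclose(f); return 1; }

    size_t got, index = 0;
    while ((got = fread(buf, 1, CHUNK_SIZE, f)) > 0) {
        uint64_t h = fnv1a(buf, got);
        if (have_chunk(h))
            printf("chunk %zu: already on server, skipping\n", index);
        else
            printf("chunk %zu: new, would send %zu bytes (hash %016llx)\n",
                   index, got, (unsigned long long)h);
        index++;
    }
    free(buf);
    fclose(f);
    return 0;
}
```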

so basically.. an internal version of that, where your program is what's setting the file structure/'file system',
where it's all contained inside the single file, whereas if you used dd it would just be its own filesystem,

where with your setup you'd have more control over how much filesystem data/block size and stuff?

1 Like

Kinda. Well, at first it's one file, but when the data exceeds a tuned value, additional files will be required.

Yeah, I need a certain level of control over all this stuff.

well, it definitely sounds like the usual learning experience if you've never designed a filesystem or whatever before.

but i think i'd probably opt for dd / just use a regular filesystem, for the ease of access to the files/data if something shat the bed. but then you'd be using a regular filesystem (for said manual access), so it might be less efficient, or give you fewer options as far as combining files or whatever to fill blocks / get better space utilization. and i have no idea what all you're looking at as far as the network stuff, so manually controlling the blocks is probably related to that.

i guess the concerns i'd look into would be like.. what exactly happens if you lost power while the program was running or something. like, when it's accessing the volume, what happens if it doesn't 'close out' the file and the system crashed? you'd want to make sure the least amount of data becomes corrupt, etc.
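
the usual trick for that (just a generic sketch, not saying it fits your design; file names made up) is to write to a temp file, fsync it, then rename it over the old one, so a crash leaves either the whole old file or the whole new one:

```c
/* POSIX-flavoured sketch of an "atomic" file update: write a temp file, flush
 * it to disk, then rename it over the original. readers only ever see the old
 * or the new complete version, never a half-written one. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    const char *data = "new contents of the metadata file\n";

    int fd = open("metadata.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    if (write(fd, data, strlen(data)) != (ssize_t)strlen(data)) { perror("write"); return 1; }
    if (fsync(fd) != 0) { perror("fsync"); return 1; }   /* make sure it hit the disk */
    close(fd);

    /* atomic swap; for full durability you'd also fsync the containing directory */
    if (rename("metadata.tmp", "metadata.json") != 0) { perror("rename"); return 1; }
    return 0;
}
```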

but then there are probably opportunities to learn about error correction, bit rot, etc.

1 Like

Yeah, this is quite the endeavour. I'm not expecting to be able to do this easily, but it's definitely going to be cool. I intend to implement Facebook's zstd compression library to speed up data transfer over the network.

I'm planning to implement checksumming and power-loss detection. If the software detects a power loss, it will scrub the data. I've got no plans for parity as of yet, so data that fails the scrub will just be re-synchronized from a backup target where the data's good (or from the original PC). I'm not sure parity would even be required at the backup level, but I'm not certain. Thoughts?

I think I figured it out, or at least I'm getting closer.

file.storage -> use this to store the actual data that's being backed up. This can be varying-sized chunks of data, up to 4MB each (obviously tunable, but for now I'm going to chunk at 4MB).

storage.json -> use this to store metadata for the storage file. Holds information such as chunk size, UUID, checksum, etc...

computer.json -> holds all the data relating to the computer: which files are backed up, which chunks (and in which order) hold the data for which versions, etc...

Hmmm. It could work, but I'm extremely tired, so I'm going to bed. I'll keep working on this in the morning.