Software Developer Mega Thread

In real-world practice, do you go for the simple + fairly efficient code, or the very complex + most efficient route?

I’m not sure those are mutually exclusive :wink:

Disclaimer: I am an ancillary to the development team. So when they want their code on AWS I’ll write a deployment in C# using the AWS SDK. I don’t actually write the enterprise applications.

That being said, I dislike things like Code Wars and HackerRank, because the impossible-to-read one-liner is what gets you the most points. But I do like using simple ternary operators over longer if/else blocks.

Back in the day I would click the little light bulb in my IDE and take the recommendation. I would Ctrl+Z and Ctrl+Y several times to see what it changed, and keep the most readable version.

2 Likes

This is exactly where I’m at. I’ll use all the features of the language, assuming it leaves the code readable.

1 Like

Consider the following:

cat /var/run/dmesg.boot | grep '^CPU:' | cut -d '(' -f 2 | cut -d ')' -f 1
sysctl -n hw.clockrate

Naturally I prefer the latter.

max=$(echo $((max > i ? max : i)))
max=$((max > i ? max : i))

Again, shorter is better.

Ok so a real example then:

#
# This function continues to write to a filenum number of files into dirnum
# number of directories until either file_write returns an error or the
# maximum number of files per directory have been written.
#
# Usage:
# fill_fs [destdir] [dirnum] [filenum] [bytes] [num_writes] [data]
#
# Return value: 0 on success
#		non 0 on error
#
# Where :
#	destdir:    is the directory where everything is to be created under
#	dirnum:	    the maximum number of subdirectories to use, -1 no limit
#	filenum:    the maximum number of files per subdirectory
#	bytes:	    number of bytes to write
#	num_writes: number of times to write out bytes
#	data:	    the data that will be written
#
#	E.g.
#	fill_fs /testdir 20 25 1024 256 0
#
# Note: bytes * num_writes equals the size of the testfile
#
function fill_fs # destdir dirnum filenum bytes num_writes data
{
	typeset destdir=${1:-$TESTDIR}
	typeset -i dirnum=${2:-50}
	typeset -i filenum=${3:-50}
	typeset -i bytes=${4:-8192}
	typeset -i num_writes=${5:-10240}
	typeset data=${6:-0}

	typeset -i odirnum=1
	typeset -i idirnum=0
	typeset -i fn=0
	typeset -i retval=0

	mkdir -p $destdir/$idirnum
	while (($odirnum > 0)); do
		if ((dirnum >= 0 && idirnum >= dirnum)); then
			odirnum=0
			break
		fi
		file_write -o create -f $destdir/$idirnum/$TESTFILE.$fn \
		    -b $bytes -c $num_writes -d $data
		retval=$?
		if (($retval != 0)); then
			odirnum=0
			break
		fi
		if (($fn >= $filenum)); then
			fn=0
			((idirnum = idirnum + 1))
			mkdir -p $destdir/$idirnum
		else
			((fn = fn + 1))
		fi
	done
	return $retval
}
#
# This function continues to write to a filenum number of files into dirnum
# number of directories until either file_write returns an error or the
# maximum number of files per directory have been written.
#
# Usage:
# fill_fs [destdir] [dirnum] [filenum] [bytes] [num_writes] [data]
#
# Return value: 0 on success
#		non 0 on error
#
# Where :
#	destdir:    is the directory where everything is to be created under
#	dirnum:	    the maximum number of subdirectories to use, -1 no limit
#	filenum:    the maximum number of files per subdirectory
#	bytes:	    number of bytes to write
#	num_writes: number of times to write out bytes
#	data:	    the data that will be written
#
#	E.g.
#	fill_fs /testdir 20 25 1024 256 0
#
# Note: bytes * num_writes equals the size of the testfile
#
function fill_fs # destdir dirnum filenum bytes num_writes data
{
	typeset destdir=${1:-$TESTDIR}
	typeset -i dirnum=${2:-50}
	typeset -i filenum=${3:-50}
	typeset -i bytes=${4:-8192}
	typeset -i num_writes=${5:-10240}
	typeset data=${6:-0}
	typeset f

	mkdir -p $destdir/{1..$dirnum}
	for f in $destdir/{1..$dirnum}/$TESTFILE{1..$filenum}; do
		file_write -o create -f $f -b $bytes -c $num_writes -d $data \
		|| return $?
	done
	return 0
}

Now it’s less obvious which will perform better. But it’s not obvious what the first version of the function is even doing. They’re basically identical, barring slightly different side effects (the filenames change slightly, and which directories exist in case of failure differs).

I prefer the second version because it is simpler. It has less code, fewer variables, less state to keep track of. There are a few more shell features being used, but if you understand the shell you can much more quickly get a grasp of what is going on.

But you can take it too far in either direction.

2 Likes

You can’t do that in any programming language that uses IEEE floating point.

The double is 64 bits and so is the uint64_t. Except that the double is actually 53 bits of mantissa and 11 bits of exponent. So once past 53 bits the value gets more and more imprecise. At UINT64_MAX it happens to have an error to the plus side. So when it converts back to uint64_t it doesn’t fit.

Oddly enough if I compile that with -O3 on GCC I get the correct answer because it replaces the entire program with precalculated constant values.

Less useful when the code is where I found it, validating user input :wink:
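To make the rounding concrete, here’s a tiny standalone demo (my own sketch, not the code in question):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* UINT64_MAX (2^64 - 1) has no exact double representation; the
     * nearest double is 2^64, so the conversion rounds up past the max */
    double d = (double)UINT64_MAX;

    printf("UINT64_MAX         = %llu\n", (unsigned long long)UINT64_MAX);
    printf("(double)UINT64_MAX = %.1f\n", d);

    /* so a naive "value <= (double)UINT64_MAX" bounds check accepts 2^64,
     * a value that does not fit when cast back to uint64_t */
    return 0;
}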

Luckily UINT64_MAX + 1 is a power of 2, which is exactly representable in an IEEE float. So the bounds check can be rewritten as something like

uint64_t val = (d < (double)u) ? (uint64_t)d : u;

…with a lengthy comment about why this is correct.

(and for the record, the actual code does not use a ternary, it just prints error messages and bails if the check fails)
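Something in this shape, roughly (a sketch with a made-up checked_u64() helper, not the actual code):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Accept only doubles that survive the round trip to uint64_t. Comparing
 * with "<" against (double)UINT64_MAX (which rounds up to 2^64) excludes
 * exactly the values that would overflow the cast back; NaN fails both
 * comparisons and is rejected too. */
static uint64_t checked_u64(double d)
{
    if (!(d >= 0.0) || !(d < (double)UINT64_MAX)) {
        fprintf(stderr, "value does not fit in a uint64_t\n");
        exit(EXIT_FAILURE);
    }
    return (uint64_t)d;
}

int main(void)
{
    printf("%llu\n", (unsigned long long)checked_u64(12345.0));
    return 0;
}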

In Python the walrus operator enters the chat.

:=

3 Likes

I was thinking about your weird code example again today, particularly why doh comes out with a weird value. By the way, I don’t get the same value for doh that you get.

The ternary expression in C is always strange but especially in this case because it should essentially be uint64_t doh = u; but it isn’t. And after thinking about it I realized the C value promotion rules always count floating point as better than integer so it “promotes” that u to a double in the ternary, then converts it into a uint64_t during assignment. But since it no longer fits into a uint64_t it isn’t equal to u.
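A tiny sketch of what I mean (made-up values, not your original example):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t u = UINT64_MAX;
    double   d = 18446744073709551616.0;   /* 2^64, what (double)u rounds up to */

    /* With mixed operands the ternary's result type is double, so the u
     * branch is converted to double (2^64) and then back to uint64_t on
     * assignment. 2^64 doesn't fit, so the result is undefined and varies
     * between compilers and optimisation levels. */
    uint64_t doh = (d < u) ? d : u;

    printf("u   = %llu\n", (unsigned long long)u);
    printf("doh = %llu\n", (unsigned long long)doh);
    return 0;
}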

Doing it like this: uint64_t doh = (d < u) ? (uint64_t)d : (uint64_t)u; with the explicit casts makes it come out as I’d expect.

Silly C promotion rules.

I know I’ve been overthinking this, but I had to figure it out and share. :slight_smile:

3 Likes

And another weird thing. I don’t know which compiler or C library you were using, but on my glibc 2.30, using %.999999999 causes printf to just crash right in the middle of the format string. I changed it to %.99, which works.

I guess you can cause an overflow if you use gigantic numbers in the format.

Worked on FreeBSD libc and CentOS 7 glibc for me :man_shrugging:

I did a pretty bad job converting real code into an example, but that is close to what the solution ended up being. Changing the comparison from <= to < is the key. Also critical was adding an explicit cast to double on u in the comparison and keeping the explicit cast to uint64_t on d (both of which I lost from my example - they’re back now).

uint64_t val = (d < (double)u) ? (uint64_t)d : u;

With your solution the code doesn’t actually compile with -Werror,-Wimplicit-int-float-conversion, which is what caught this error in the first place :wink:

uint64_t doh = (d < u) ? (uint64_t)d : (uint64_t)u;

Clang 10 warns that u is implicitly cast to double for the comparison, which causes it to have a different value outside the range of uint64_t. Hence changing the comparison to exclude that value. The explicit cast only tells the compiler to trust me.

1 Like

Do you folks run everything bare metal on your development rig, or do you set up VMs for various services?

I have always installed virtual machines. Proxy, VM. MongoDB, VM. MySQL, VM. Different version of g++, VM(s). Node.js, VM… You get the idea.

A good friend of mine only does this when he’s simulating something. EVERYTHING is installed on his machine under /etc, /opt, or /home/$USER/src.

I discovered that the three development teams I support all do this as well, with the exception of one person. His DBs are in Docker containers. But everything else: apps, web servers, tools, he has bare metal.

Am I doing it wrong? :wink:

I would just feel dirty if I installed mongo and changed my source to read “localhost” instead of “ip”.

Curious to know what everyone else does.

1 Like

I’m no software dev, but I usually have one Linux VM for testing stuff. It’s easy to just take a snapshot every now and then, and if I manage to screw something up, all I need to do is revert back to a working snapshot if I can’t figure the problem out any other way.

And I started doing this when I got tired of cleaning up my own mess after I managed to break everything. :smile:

1 Like

Depends on the deployment strategy and what I’m developing.

Node/Vue? runs on metal.

Java Spring apps? containers.

Work stuff? containers.

I find containers to be a much faster alternative to VMs, and a bit less resource-hungry as well, so I can deploy them on my laptop (16GB RAM) or my desktop (64GB RAM) with very little issue.

Set up .env files. Then you can load them like this (or something close to it; I forget exactly how to read env vars from Node):

var listen_ip = process.env.MONGO_LISTEN_IP || 'localhost'

And that also allows you to specify your listen IPs more easily when spinning up your containers.

2 Likes

Professionally, I have always run VMs. Personally, I am just getting around to doing that, as it is easier to build for ARM in an ARM VM than it is to run the cross-compilation tools and then undo whatever they have done to mess up the system when I want to update the base system or compile for, say, PPC.

I am on Poo-dozer personally so…

1 Like

I wouldn’t make it :wink:

Not using let in js :wink: Tsk, tsk, tsk.

I laughed because I know exactly what you’re talking about. I use LXC for some stuff but KVM is usually where everything else is. Maybe I’ll swap some ish over and see how that goes.

Mongo on LXC because installing Mongo on Arch took 5 hours and failed lmao. But it’s Arch btw.

:thonk: Is this a jab at AMD? If so lmao. If not :thonk:

Ohhh this sounds interesting. What are you building?

3 Likes

// save yourself some typing later
const DEBUG = process.env.NODE_ENV === 'development'


const listenIp = DEBUG ? 'localhost' : process.env.MONGO_LISTEN_IP

And don’t forget this.

4 Likes

Jails, or VMs for kernel development

2 Likes

const all the things :wink:

1 Like

Poo = Poor, my typing was off. But yes, a jab at AMD. Honestly, Bulldozer is not as bad as people think when it comes to productivity. It has served me well all these years, but now AM3+ is showing its age, especially with gaming. (FX-6300)

In regards to ARM and PPC, I am trying to pick up some of the slack with PS3 Linux using Rene Rebe’s t2sde. That, and I am working on trying to bring modern Linux to my HP TouchPad. Basically, I am learning how to make Linux from scratch with no distro fixes.

1 Like