The small Linux problem thread

Recent is ordered by frequency of use and All is ordered by application name, alphabetically.

You can apparently do things with application directories as seen in /usr/share/desktop-directories/. Probably need to read up on the XDG specs.
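As a sketch of what those files look like: `.directory` entries follow the XDG Desktop Entry format, and a per-user one can go in `~/.local/share/desktop-directories/`. The name and filename below are made up for illustration.

```shell
# Hypothetical example: create a per-user menu directory entry.
# "Internet Tools" and the filename are invented names.
mkdir -p ~/.local/share/desktop-directories
cat > ~/.local/share/desktop-directories/internet-tools.directory <<'EOF'
[Desktop Entry]
Version=1.0
Type=Directory
Name=Internet Tools
Icon=applications-internet
EOF
```

A menu definition under `~/.config/menus/` would then reference this entry by filename; the XDG menu spec has the details.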

This seems to be working so far. Thanks!


I'm trying to get my NFS server to have more throughput. Right now I can transfer about 1 gigabyte per minute, and I have file transfers that can get hundreds of GB in size.

I'm using CentOS 7 as the NFS server. It's a VM, and I have put it on more performant datastores, which helped a bit, but not nearly enough.

At this point I'm thinking about teaming NICs to see if I can increase the throughput. I'm using the "defaults" option in addition to NFSv4 when I mount the NFS share.

Can anyone think of some things to check before assuming I need more NICs?


What kind of workload is the NFS traffic? Large sequential, small random or mixed?

As a technical term I'm not sure what sequential/mixed means in this case, but the case I'm trying to solve involves getting a 30 GB - 700 GB file to transfer. I'm assuming this would be sequential.

Also, it's already compressed, so I'm not lucky enough to be able to just compress the data before the transfer.

I monitor the transfer using:

watch -n1 ls -l #assume I'm in mount directory on the NFS server

In that directory, the only file present is the one being uploaded.

The ā€œtotalā€ field is sometime 3 - 15 gb ā€œaheadā€ of the file size.

[image: ls -l output showing the "total" field equal to the file size]

In this image you can see the total field and the file size. They are equal here because I added "no_wdelay" to try to force synchronous writes. However, before adding this there was a significant gap between "total" and the file size.

I'm assuming there is some degree of windowing going on, where the total shows the amount loaded into memory and the file size is what has actually been written to disk. When the file size catches up to the total, the total jumps up again and the file size slowly climbs.

This led me to think there is an issue with memory-to-disk writes on the local machine. However, I'm able to write a 30 GB file locally on the NFS server to that directory using dd in about 7 minutes, so I'm not certain that is the issue. But given that there is windowing, maybe that is actually slow.

Using:

free -h #on the NFS server

I noticed the "free" memory began to be consumed during the transfer, and the buffer/cache memory increased until free memory was down to about 100 MB. At this point I increased memory, and further tests showed this helped improve the time. This may be a coincidence, but my guess is that this will have limited returns the more I add, especially as the transferred file size grows.
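If the page cache really is buffering ahead of the disk like that, the kernel's writeback tunables are worth a look. A read-only sketch (these are standard procfs/sysctl locations on Linux):

```shell
# How much dirty (not-yet-written) data is currently in the page cache:
grep -E '^(Dirty|Writeback):' /proc/meminfo

# Kernel writeback thresholds, as a percentage of available memory.
# When dirty data exceeds vm.dirty_ratio, writers block until it is
# flushed, which would look like the stall-then-jump behavior described.
sysctl vm.dirty_background_ratio vm.dirty_ratio
```

Raising the ratios lets more data buffer before writers stall; lowering them smooths writeback out. Either way, this only papers over a disk that can't keep up with the network.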

The next thing I did was create an NFS server on a datastore with better performance. This also cut down on the time. It's possible that increasing RAM on this machine might cut the time down more.

To me the equation for this operation looks something like this:

totalTime = processor{client} + RAM{client} + (numberOfNetworkTransfers * networkLatency) + processor{server} + RAM{server} + disk{server}

The numberOfNetworkTransfers could be cut down by increasing the amount of data sent during each transfer. I'm unable to cut down on RAM-to-disk writes any further, so I'm assuming here that I need to increase transfer speed or reduce network latency. That's why I'm thinking NIC teaming.
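Before adding NICs, it may be worth isolating the network term from the storage term with a raw throughput test. An iperf3 run (hostname is a placeholder) would show whether the single link is actually the bottleneck:

```shell
# On the NFS server (hostname nfs-server is a placeholder):
iperf3 -s

# On the client, push TCP traffic for 30 seconds:
iperf3 -c nfs-server -t 30
```

If this reports close to line rate (roughly 9.4 Gbit/s of goodput on 10GbE), the network isn't the limiting factor and teaming is unlikely to help; if it reports ~1 Gbit/s, something on the path is negotiating down.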

But, I could be missing something.

What's the storage stack? md/LVM/ext4?

Is this 10GbE?

Try jumbo frames and NFSv3 with tcp,async,rsize/wsize=65536 (on the client).
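A sketch of what such a mount could look like (server name and export path are placeholders):

```shell
# Hypothetical server name and export path; adjust to the real ones.
# vers=3 forces NFSv3, proto=tcp selects the TCP transport, async allows
# the server to acknowledge writes before they hit disk, and the 64 KiB
# rsize/wsize enlarge each read/write RPC.
sudo mount -t nfs -o vers=3,proto=tcp,async,rsize=65536,wsize=65536 \
    nfs-server:/export/data /mnt/data
```

Note that async trades durability for speed: a server crash mid-transfer can lose acknowledged writes, which may or may not matter for re-runnable bulk copies like these.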

Yeah, 10GbE. Client is Btrfs, server is XFS.

We should be able to do jumbo frames on the network, but don't I have to change the MTUs on the client's and server's NICs to handle that?


Yes, set client and server to 9000. The switches usually include the header, so it's 9200-ish; just set it as high as it will go. On the switch it's just a maximum that it will tolerate, since it's obviously not generating traffic. I'd recommend a dedicated VLAN, though, to avoid MTU mixing. I assume this is a dedicated physical interface separate from the default gateway (on both server and client)?

Client and server each have a single NIC. The switch is set to 9000.


Back To Problem

So, only a single NIC on each machine. And just a heads-up: these are VMs.


Ah, ok. Go ahead and set client and server to 9000. It's going to create some load somewhere on your network when normal traffic hits a standard MTU, but you can figure out whether that's a reasonable compromise or if it matters at all.
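Assuming the interface is named eth0 (a placeholder; check with `ip link`), setting and then verifying the MTU end-to-end might look like:

```shell
# Set jumbo frames on the interface (run on both client and server;
# eth0 is a placeholder interface name):
sudo ip link set dev eth0 mtu 9000

# Verify a full 9000-byte frame actually passes without fragmentation.
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header); -M do forbids
# fragmentation, so this fails loudly if any hop has a smaller MTU.
ping -M do -s 8972 -c 3 nfs-server
```

On VMs, the virtual switch or port group usually needs its MTU raised too, or the guest setting alone won't take effect on the wire.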

Thanks for the advice. I'm going to run these tests in the morning and will reply to let you know how they turn out.

Thanks @oO.o

cotton


That helps tremendously. The main difficulty is that I have a really old /home directory with a ton of obsolete icons, and organizing it is a nightmare.

I've found that, like iOS, GNOME lets you drop one icon on top of another to create a "group", for lack of a better word. That helps too...


Does anyone know off-hand how portable the binaries produced by tic are? I'm wondering if I can dump them into my dotfile repo or if they need to be compiled for each machine.

Not a big deal either way; I just want to have italics inside tmux/vim, and that requires adding some things to the screen-256color terminfo.

Shadowbane back with another Linux problem. I am trying to SSH into an Ubuntu Server virtual machine from my Kubuntu 20.04.1 desktop, where I want to install and set up a Pi-hole. I am stuck because I can't log into my Ubuntu Server VM using SSH, even though I did install the SSH server package on it. Anyone have any ideas why the Ubuntu Server is refusing the connection from my Kubuntu desktop?

virtualbox Kernel driver not installed (rc=-1908)

Gets me every time. I have removed and reinstalled the DKMS modules and booted into a 5.8 kernel. To no avail.

How do I fix this without rebooting?


I'm not certain, but I don't believe the format has changed in ages. However, I wouldn't be too surprised if there were problems with endianness or word sizes.

man 5 terminfo has this:

It is not wise to count on portability of binary terminfo entries between commercial UNIX versions. The problem is that there are at least two versions of terminfo (under HP-UX and AIX) which diverged from System V terminfo after SVr1, and have added extension capabilities to the string table that (in the binary format) collide with System V and XSI Curses extensions.

So it seems that they only worried about differences between vendors, not individual OS versions.


Yeah. I'd like this to be compatible between macOS and Linux, so I think I'll play it safe and compile it per machine. I might just put it in the zshrc and compile it if it's missing.
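A sketch of that zshrc approach: probe for the entry with infocmp and compile it with tic only when it's absent. The entry name and source-file path below are hypothetical.

```shell
# In ~/.zshrc: compile a custom terminfo entry once per machine.
# "screen-256color-italic" and the dotfiles path are made-up names.
if ! infocmp screen-256color-italic >/dev/null 2>&1; then
    # For non-root users, tic installs the compiled entry into
    # ~/.terminfo, which the per-machine ncurses then picks up.
    tic -x "$HOME/dotfiles/screen-256color-italic.terminfo"
fi
```

This keeps only the portable source file in the repo and sidesteps the binary-format question entirely.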

I assume it's running? Check the firewall?
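A few quick server-side checks along those lines (the systemd unit is "ssh" on Ubuntu, "sshd" on some other distros):

```shell
# Is the SSH daemon actually running?
systemctl status ssh

# Is anything listening on port 22?
ss -tlnp | grep ':22'

# Is the firewall blocking it? (ufw is Ubuntu's default frontend)
sudo ufw status
```

If the daemon is up and listening but the client still sees "connection refused", the VM's network mode (NAT vs. bridged) is the next suspect.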

I think I found the answer to my problem. The Ubuntu Server installer sets up key-based SSH by default, and of course I don't have a key. I am trying an earlier version of Ubuntu Server to see if that works.

As far as I know, you can't fix it without rebooting. It would help if you mentioned what version of Linux you are using. I have found VirtualBox's manual to be very helpful; here is the link to VirtualBox's user manual.