Show off your CHIA mining rigs!

Well, I am experimenting with this setup; I should get at least 42 plots a day with it, so this is my latest attempt to see whether it makes things better or worse. All the NVMe temp drives are formatted as XFS with CRC disabled and mounted with noatime,nodiratime,discard,defaults 0 0.
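
For reference, per temp drive that works out to roughly the following (the device name is just an example; the mount point matches the jobs in the config below):

mkfs.xfs -f -m crc=0 /dev/nvme0n1    # XFS with metadata CRCs disabled

# /etc/fstab entry for one of the temp mounts
/dev/nvme0n1  /mnt/datanvme/001  xfs  noatime,nodiratime,discard,defaults  0 0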

chia_location: /home/chia/chia-blockchain/venv/bin/chia
manager:
  check_interval: 60
  log_level: ERROR
log:
  folder_path: /var/log/chia
view:
  check_interval: 60
  datetime_format: "%Y-%m-%d %H:%M:%S"
  include_seconds_for_phase: false
  include_drive_info: false
  include_cpu: true
  include_ram: true
  include_plot_stats: true
notifications:
  notify_discord: false
  discord_webhook_url: https://discord.com/api/webhooks/0000000000000000/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  notify_sound: false
  song: audio.mp3
  notify_pushover: false
  pushover_user_key: xx
  pushover_api_key: xx
  notify_twilio: false
  twilio_account_sid: xxxxx
  twilio_auth_token: xxxxx
  twilio_from_phone: +1234657890
  twilio_to_phone: +1234657890
instrumentation:
  prometheus_enabled: false
  prometheus_port: 9090
progress:
  phase1_line_end: 802
  phase2_line_end: 835
  phase3_line_end: 2475
  phase4_line_end: 2621
  phase1_weight: 27.64
  phase2_weight: 26.15
  phase3_weight: 43.21
  phase4_weight: 3.0
global:
  max_concurrent: 48
  max_for_phase_1: 16
  minimum_minutes_between_jobs: 0

jobs:
  - name: nvme001
    max_plots: 999
    farmer_public_key: <>
    pool_public_key: <>
    temporary_directory: /mnt/datanvme/001
    temporary2_directory:
    destination_directory: /mnt/chia_tmp_dest
    size: 32
    bitfield: true
    threads: 16
    buckets: 128
    memory_buffer: 3390
    max_concurrent: 2
    max_concurrent_with_start_early: 6
    initial_delay_minutes: 0
    stagger_minutes: 65
    max_for_phase_1: 2
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: true
    cpu_affinity: [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 ]

  - name: nvme002
    max_plots: 999
    farmer_public_key: <>
    pool_public_key: <>
    temporary_directory: /mnt/datanvme/002
    temporary2_directory:
    destination_directory: /mnt/chia_tmp_dest
    size: 32
    bitfield: true
    threads: 16
    buckets: 128
    memory_buffer: 3390
    max_concurrent: 2
    max_concurrent_with_start_early: 6
    initial_delay_minutes: 0
    stagger_minutes: 65
    max_for_phase_1: 2
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: true
    cpu_affinity: [ 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31 ]

  - name: nvme003
    max_plots: 999
    farmer_public_key: <>
    pool_public_key: <>
    temporary_directory: /mnt/datanvme/003
    temporary2_directory:
    destination_directory: /mnt/chia_tmp_dest
    size: 32
    bitfield: true
    threads: 16
    buckets: 128
    memory_buffer: 3390
    max_concurrent: 2
    max_concurrent_with_start_early: 6
    initial_delay_minutes: 0
    stagger_minutes: 65
    max_for_phase_1: 2
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: true
    cpu_affinity: [ 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47 ]

  - name: nvme004
    max_plots: 999
    farmer_public_key: <>
    pool_public_key: <>
    temporary_directory: /mnt/datanvme/004
    temporary2_directory:
    destination_directory: /mnt/chia_tmp_dest
    size: 32
    bitfield: true
    threads: 16
    buckets: 128
    memory_buffer: 3390
    max_concurrent: 2
    max_concurrent_with_start_early: 6
    initial_delay_minutes: 0
    stagger_minutes: 65
    max_for_phase_1: 2
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: true
    cpu_affinity: [ 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63 ]

  - name: nvme005
    max_plots: 999
    farmer_public_key: <>
    pool_public_key: <>
    temporary_directory: /mnt/datanvme/005
    temporary2_directory:
    destination_directory: /mnt/chia_tmp_dest
    size: 32
    bitfield: true
    threads: 16
    buckets: 128
    memory_buffer: 3390
    max_concurrent: 2
    max_concurrent_with_start_early: 6
    initial_delay_minutes: 0
    stagger_minutes: 65
    max_for_phase_1: 2
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: true
    cpu_affinity: [ 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79 ]

  - name: nvme006
    max_plots: 999
    farmer_public_key: <>
    pool_public_key: <>
    temporary_directory: /mnt/datanvme/006
    temporary2_directory:
    destination_directory: /mnt/chia_tmp_dest
    size: 32
    bitfield: true
    threads: 16
    buckets: 128
    memory_buffer: 3390
    max_concurrent: 2
    max_concurrent_with_start_early: 6
    initial_delay_minutes: 0
    stagger_minutes: 65
    max_for_phase_1: 2
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: true
    cpu_affinity: [ 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95 ]

  - name: nvme007
    max_plots: 999
    farmer_public_key: <>
    pool_public_key: <>
    temporary_directory: /mnt/datanvme/007
    temporary2_directory:
    destination_directory: /mnt/chia_tmp_dest
    size: 32
    bitfield: true
    threads: 16
    buckets: 128
    memory_buffer: 3390
    max_concurrent: 2
    max_concurrent_with_start_early: 6
    initial_delay_minutes: 0
    stagger_minutes: 65
    max_for_phase_1: 2
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: true
    cpu_affinity: [ 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111 ]

  - name: nvme008
    max_plots: 999
    farmer_public_key: <>
    pool_public_key: <>
    temporary_directory: /mnt/datanvme/008
    temporary2_directory:
    destination_directory: /mnt/chia_tmp_dest
    size: 32
    bitfield: true
    threads: 16
    buckets: 128
    memory_buffer: 3390
    max_concurrent: 2
    max_concurrent_with_start_early: 6
    initial_delay_minutes: 0
    stagger_minutes: 65
    max_for_phase_1: 2
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: true
    cpu_affinity: [ 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127 ]
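
(If anyone wants to copy this: with the stock Swar Plot Manager layout the config above lives in config.yaml next to manager.py, and then it is just:)

python manager.py start    # launch the plot manager with the config above
python manager.py view     # live status table like the one further down
python manager.py stop     # stop the manager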

In the previous run I got only 28, so I am testing with fewer parallel processes; with a lot of them running in parallel, a plot was taking 24 hours to complete all phases. This is what I have so far with the setup above:

=========================================================================================================================
num     job     k    plot_id    pid           start          elapsed_time   phase   phase_times   progress   temp_size
=========================================================================================================================
1     nvme002   32   c71d717   66236   2021-05-23 18:08:39   03:39:11       2       02:04         47.45%     195 GiB  
2     nvme003   32   1893a06   66915   2021-05-23 18:11:46   03:36:04       2       02:09         43.49%     212 GiB  
3     nvme004   32   96bc708   66916   2021-05-23 18:11:46   03:36:04       2       02:10         43.49%     212 GiB  
4     nvme005   32   aae1409   66917   2021-05-23 18:11:46   03:36:04       2       02:02         45.07%     195 GiB  
5     nvme006   32   7f4544d   66918   2021-05-23 18:11:46   03:36:04       2       02:07         45.07%     195 GiB  
6     nvme007   32   c7f5e91   66919   2021-05-23 18:11:46   03:36:04       2       02:04         45.07%     195 GiB  
7     nvme008   32   fddc525   66920   2021-05-23 18:11:46   03:36:04       2       02:07         45.07%     195 GiB  
8     nvme001   32   33915f5   78328   2021-05-23 19:07:50   02:40:00       2       02:33         33.19%     158 GiB  
9     nvme002   32   1e80e68   79413   2021-05-23 19:13:50   02:34:00       1                     26.74%     160 GiB  
10    nvme003   32   66a7b1b   79987   2021-05-23 19:16:50   02:30:59       1                     25.54%     163 GiB  
11    nvme004   32   c21b1c2   79988   2021-05-23 19:16:50   02:30:59       1                     25.50%     163 GiB  
12    nvme005   32   3a9e2af   79989   2021-05-23 19:16:50   02:30:59       1                     25.78%     162 GiB  
13    nvme006   32   21463ff   79990   2021-05-23 19:16:50   02:30:59       1                     25.88%     162 GiB  
14    nvme007   32   0e75ba5   79991   2021-05-23 19:16:50   02:30:59       1                     25.54%     162 GiB  
15    nvme008   32   79e2e13   79992   2021-05-23 19:16:50   02:30:59       1                     25.40%     163 GiB  
16    nvme001   32   189ef25   91004   2021-05-23 20:12:55   01:34:55       1                     16.89%     162 GiB  
=========================================================================================================================
Manager Status: Running

CPU Usage: 19.1%
RAM Usage: 32.29/125.69GiB(26.7%)

Plots Completed Yesterday: 18
Plots Completed Today: 28

I’m very new to this, but would it be better to have 8 running at the same time, but with a staggered start? It looks like you have 8 in parallel, but all started at the same time. I suspect you are distributing the writes to different drives/folders, so I guess it comes down to where the bottleneck is for your system, although I don’t see one.

It looks like you have 128 buckets, 16 threads, and 3390 MiB of memory. Memory and buckets look OK to me, but I thought there were diminishing returns past 4 or 6 threads?
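
For anyone following along, those per-job numbers are the same knobs you would pass to a bare chia plots create call, roughly like this (keys and paths taken from the config above):

chia plots create -k 32 -r 16 -u 128 -b 3390 \
  -f <farmer_public_key> -p <pool_public_key> \
  -t /mnt/datanvme/001 -d /mnt/chia_tmp_dest
# bitfield: true just means the -e (no-bitfield) flag is left off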

These are 8 jobs for 8 separate NVMe drives; more parallel jobs will start after the stagger and when earlier ones get out of phase 1. In the end I expect a maximum of 48 jobs running at the same time, but they will not all start at once.
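
A quick sanity check on that ceiling, straight from the config numbers:

# 8 jobs x max_concurrent_with_start_early (6) per job
echo $(( 8 * 6 ))    # 48, which matches the global max_concurrent: 48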

I set up a Chia farming rig! It’s all parts I already had; some of it I was going to give away, but I got lazy.

I didn’t like the idea of burning up SSDs, so I actually plot on HDDs. It’s not much slower than plotting on an SSD, so maybe my bottleneck is elsewhere? I’ve also been testing with only 1 or 2 plots at a time, staggered, so maybe with more plots running at the same time I’ll find the bottleneck. At any rate, I have six 2 TB HDDs in mdadm raid 0 (yes, raid 0!!) to speed things up; a rough sketch of the array setup is below.
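
Roughly, the array looks like this (device names and the mount point are placeholders; the no-journal ext4 is the same thing I mention again further down):

# six 2 TB drives striped into one md array
mdadm --create /dev/md0 --level=0 --raid-devices=6 /dev/sd[b-g]

# ext4 without a journal to cut down on extra writes, mounted as the temp space
mkfs.ext4 -O ^has_journal /dev/md0
mount /dev/md0 /mnt/plot_tmp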

I am looking to automate it, and right now the path of least resistance is good ol’ fashioned bash scripting and cron jobs. I think Swar might be a good solution, but… reasons, and lazy.
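
Nothing fancy, roughly along these lines (the paths, thread count, and the stagger are placeholders for whatever your box can take):

# /etc/cron.d/chia-plot  --  start one new plot every 3 hours as user 'chia'
0 */3 * * *  chia  /home/chia/plot_one.sh >> /home/chia/plot_cron.log 2>&1

#!/usr/bin/env bash
# plot_one.sh  --  kick off a single k=32 plot on the raid0 temp array
set -euo pipefail
CHIA=/home/chia/chia-blockchain/venv/bin/chia   # adjust to your install
"$CHIA" plots create -k 32 -r 4 -u 128 -b 3390 \
  -t /mnt/plot_tmp -d /mnt/plots
# add -f/-p keys (or -a <fingerprint>) if this box has more than one key in its keychain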

Now if I can just figure out how to add a hot spare to my raid 0 array…

[image: the rig]
am i doing it rite?


A little off topic, but do they have 3D models so you can print your own Chia Pet?


You can partition your hard drives for speed!

I would be interested to know the difference in speed.
[video] Skip to about 20 seconds in.

I actually finally started plotting. I have a 480 GB SATA SSD as the temp drive, but plotting is mighty slow. I let it run overnight and through the entire day, and it has just finished 2 k=32 plots. Is that normal?


I think 8-12 hours per plot is typical, but it depends on RAM, CPU, storage, stagger, etc.

For a single plot, I can finish in just over 5 hours with HDDs in raid 0.

When I plot in parallel, the time extends to 7-9+ hours, although I am still fine-tuning how many parallel plots I run. The most recent plot finished in 6.5 hours, but it was only second in the queue today, so the queue is still filling up. I’m hoping to finish 8 or 9 plots per day. I think I could do more with faster storage, but I don’t want to burn out SSDs for this.

I’ll post some metrics in a second…


@anon86748826 This one is a single plot that finished in just under 5 hours.

Time for phase 1 = 6751.227 seconds. CPU (188.850%) Fri May 28 21:37:48 2021
Time for phase 2 = 3688.340 seconds. CPU (99.610%) Fri May 28 22:39:16 2021
Time for phase 3 = 6875.443 seconds. CPU (98.650%) Sat May 29 00:33:52 2021
Time for phase 4 = 449.356 seconds. CPU (99.630%) Sat May 29 00:41:21 2021
Total time = 17764.369 seconds. CPU (133.150%) Sat May 29 00:41:21 2021

This is a parallel plot that has other plots running, staggered every 2 hours. Finished in about 6.5 hours. I expect it will get a little longer once the queue hits a steady state.

Time for phase 1 = 8078.738 seconds. CPU (188.570%) Sun May 30 08:15:42 2021
Time for phase 2 = 4800.903 seconds. CPU (90.540%) Sun May 30 09:35:43 2021
Time for phase 3 = 10107.897 seconds. CPU (85.430%) Sun May 30 12:24:11 2021
Time for phase 4 = 712.386 seconds. CPU (83.200%) Sun May 30 12:36:03 2021
Total time = 23699.925 seconds. CPU (121.560%) Sun May 30 12:36:03 2021

I have 6 old HDDs (4+ years old) in mdadm raid0, ext4 with the journal turned off, so my temp plotting space is slightly faster than an SSD but bogs down with parallel plots. The CPU is an older Threadripper, so I think my bottleneck is drive read/write speed.

FYI, I did some checks on drive writes, and it seems to be about 1.4 TB written per plot, although it might be higher depending on parameters. It doesn’t seem worth it to me to burn out solid-state drives to plot Chia.
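
If you want to reproduce that check, the kernel keeps a running count of sectors written per device (512-byte sectors), so take a reading before and after a single plot and diff the two. A rough sketch, with md0 standing in for my array:

awk '$3 == "md0" { printf "%.2f TB written so far\n", $10 * 512 / 1e12 }' /proc/diskstats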


Here is the disk throughput and utilization. The first set of blobs is from single plots; the bigger blob on the right is when I started plotting in parallel.
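
If you just want a live view of the same thing, sysstat’s iostat does the job (md0 and sdb-sdg are my devices, swap in yours):

iostat -xm 5 /dev/md0 /dev/sd[b-g]
# watch %util and wMB/s on the member disks; sustained %util near 100
# is the saturation I talk about in the next update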


Update on the chia farming…

I was running this with a new plot starting every 2 hours, but it would saturate the drive and the utilization would get too high, so the plots would take longer and longer.

Starting a new plot every 3 hours seems to hit the sweet spot: plots finish in 6 hours and I only have 2 running concurrently, which my HDD raid array can handle.

On the left side, you can see the higher utilization, while on the right, it’s more moderate/sustainable. So right now, I can do 8 plots per day on old HDDs. I could probably do more on SSDs, but I don’t want to invest anything more than time.

On a sad note, I have 2 SMART errors on one of the drives, so I might have to replace it eventually. I do have a few more old hard drives, but they are even older than what I am using now. I could just decrease the array size as a temporary measure until I have a better window to physically replace the drive.
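
The errors turned up with smartmontools; this is the quick check I run on each member disk (device name is a placeholder):

smartctl -H /dev/sdb    # overall health verdict
smartctl -A /dev/sdb | grep -Ei 'reallocated|pending|uncorrect'    # the usual early-failure counters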

[image: disk utilization graphs, before and after switching to the 3-hour stagger]


Netspace is up to 16.012 EiB.


Has anyone seen the Sabrent Plotripper NVMe drives, advertised as Coming Soon™ and having 54,000 TBW endurance for the Pro 2TB? Chunky… Price TBC.
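
Back-of-envelope, using the ~1.4 TB of temp writes per plot mentioned earlier in the thread:

# 54,000 TBW rating / ~1.4 TB written per k=32 plot
echo $(( 54000 * 10 / 14 ))    # ~38,500 plots before the rated endurance is used up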

Jeez, your server is bigger than Wendell’s…