CCTV, SMR or CMR? Why isn't WD Purple SMR?

Hello :slight_smile:

I am starting to think about my CCTV system.
Sadly I have huge performance issues with my ZoneMinder VM, so I will go with a dedicated Blue Iris build.

This means a dedicated HDD (I won't have that 10Gb fiber link to my NAS), and I was wondering about SMR HDDs.

SMR is good at writing as long as you don't need to rewrite in the middle, right?
With all the debacle about WD putting SMR where it doesn't belong, I wonder why they didn't try it in WD Purple, the line dedicated to CCTV, which is continuous writing.

What did I miss?
Is SMR a good or a bad idea for CCTV?

Bad. Very bad.
Because rewriting an inner track forces the drive to rewrite all the overlapping tracks that follow it (out to the end of the zone). In the time that takes, the write buffer will overflow and frames will be lost.

3 Likes

Bad idea, because no matter how big the drive is, the space eventually runs out when you record continuously. Recording then starts overwriting sectors at the start, which on SMR means rewriting whole shingled zones over and over.

2 Likes

From what I understand, the “only” drawback would be that once the disk is full, I would continuously lose the track overlapped by the one I am writing, and for CCTV that is usually data way past what you wanted to keep anyway.

So I am guessing there is no way to tell the drive not to ‘fix’ what it overwrites?

No, because the drive does not know about “files”; it only knows sectors. So even if you “delete” a file, that only removes the filesystem pointer to those sectors. Once you start writing to those sectors again you run into the same issue: the drive doesn’t know those sectors are now useless. In theory this can (probably?) be solved by filesystems that are aware of SMR drives and work around it, but I haven’t heard of one doing that.

2 Likes

That's sad, because it really looks like just the right use case for SMR: write once, then overwrite without caring about what was already written :frowning:

Well, CMR then. Not the same price though.

SMR generally is not in a good spot right now. In theory it is really good for archival-type usage, i.e. write once, never change the data. Perfect for media storage… right? Well, except that most media storage runs in some kind of RAID, and SMR doesn't do well with RAID (or the other way around). So yeah, there's that…

2 Likes

In theory, a “RAID-aware” SMR drive could start writing in the center and work its way outwards, instead of the “fastest track first” approach HDDs normally take.

Actually, many drive-managed SMR drives support the TRIM command. Whether the CCTV console has been updated in the last 10 years to support TRIM is a whole different can of worms.

There's no real reason a new system couldn't use SMR, especially with a host-managed drive where you could hook the user-space app into libzbc and split video files at zone boundaries. The sequential workload is a great match, if you have an API to expose it.

Surely host-managed SMR would just be able to handle more cameras' streams before failing to write out the data?
For only a few camera streams, a drive-managed drive should be able to buffer the flow enough to read in and write out; the question is just where the tipping point is. Because the workload is sustained for very long periods, it's typically not ideal for the medium, unless the flow fits into the buffer?

SMR drives aren't good for a write-intensive workload. They're great for price-to-capacity and archiving, but for a setup where the drives are going to be writing 24/7, stick with CMR drives.

Also, I like ZoneMinder, but it was such a CPU hog. I haven't kept up with it though, so that may no longer be accurate.

1 Like

I'm not saying you're wrong, but SMR may be the future of what is left of spinning rust, and I'm hoping for host-managed to be that future, because drive-managed is even worse.

I truly believe it's just a driver/software issue.
A host-managed SMR drive feels like it could easily write continuously and automatically expire footage as it overwrites (both the current track and the one after it would get expired, to avoid issues).

It would work like double-sided tape where you can't erase just one side: when the time comes to write to it again, you know both sides will be lost, but you don't care because the footage is four months old anyway.

But since we are not there yet, and I am not the one who will develop something like that… WD Purple it is (or another brand, I don't care).

I know they do, but does any filesystem or CCTV system actually support this? I know TRIM gets used for SSDs, but is it used on a regular hard drive if it's there?

but does any filesystem or CCTV system actually support this?

Most filesystems support it these days, and there are various OS-specific ways to check whether it's enabled. For Linux tuning, I'd make sure discards are issued as a scheduled batch process. Also, a 5.0+ kernel and the mq-deadline I/O scheduler will greatly increase the sequentiality of the commands issued to the disk.

Also remember the filesystem is interacting with the driver, not the disk directly, so if the driver detects and enables TRIM (which any AHCI SATA controller should handle properly) and the filesystem is mounted with discard or has a batch process scheduled, discards will happen.
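
For reference, the scheduled batch discard mentioned above is exactly what the fstrim utility does, and it boils down to a single FITRIM ioctl against the mounted filesystem. A minimal sketch, assuming Linux; the /srv/cctv mount point is just a placeholder, and in practice you'd simply schedule fstrim itself from cron or a systemd timer:

```c
/* Batch discard of all free space on a mounted filesystem,
 * i.e. what `fstrim <mountpoint>` does under the hood. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>          /* FITRIM, struct fstrim_range */

int main(void)
{
    int fd = open("/srv/cctv", O_RDONLY);   /* hypothetical recording mount */
    if (fd < 0) { perror("open"); return 1; }

    struct fstrim_range r = {
        .start  = 0,
        .len    = (__u64)-1,    /* whole filesystem        */
        .minlen = 0,            /* trim every free extent  */
    };
    if (ioctl(fd, FITRIM, &r)) {            /* filesystem walks its free space
                                               and issues discards to the disk */
        perror("FITRIM");
        close(fd);
        return 1;
    }
    /* On return, r.len holds the number of bytes actually trimmed. */
    printf("trimmed %llu bytes\n", (unsigned long long)r.len);
    close(fd);
    return 0;
}
```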

Zones are fairly big (256 MiB), and the drives are reasonably fast (200-250 MB/s). Like any mechanical drive, what will kill you is IOPS, but as long as you keep the stream in fairly large I/O sizes (128 KiB+) you should be able to use up to the full disk rate.

But for drive-managed SMR? 35-50 MB/s might be all you want to push, even with tuning. Even a perfectly sequential write will likely turn into a write-read-rewrite once the firmware is done with it. Certainly not the best application, but in a pinch you might get away with it.
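
To put rough numbers on those figures (using only the assumptions from the posts above: 256 MiB zones, 4 MB/s per camera, 200-250 MB/s sequential host-managed vs 35-50 MB/s sustained drive-managed):

```c
/* Back-of-the-envelope capacity check for the figures quoted above. */
#include <stdio.h>

int main(void)
{
    const double zone_mib     = 256.0;  /* typical SMR zone size           */
    const double cam_mb_s     = 4.0;    /* one fairly high-bitrate camera  */
    const double hm_seq_mb_s  = 225.0;  /* mid-point of 200-250 MB/s       */
    const double dm_sust_mb_s = 40.0;   /* mid-point of 35-50 MB/s         */

    printf("one zone holds ~%.0f s of one camera's footage\n",
           zone_mib / cam_mb_s);
    printf("host-managed, large sequential I/O: ~%.0f streams\n",
           hm_seq_mb_s / cam_mb_s);
    printf("drive-managed, sustained: ~%.0f streams\n",
           dm_sust_mb_s / cam_mb_s);
    return 0;
}
```

Which works out to roughly a minute of one camera's footage per zone, on the order of 55 simultaneous 4 MB/s streams at the full sequential rate, and only about 10 on a drive-managed disk.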

Pretty much, and while you could do this and just live with a track's width of bogus data, the firmware doesn't let you. You've got to explicitly reset a zone before you can start writing at the beginning of it again.

Though the ideal case would be to use libzbc to expose zones directly to your capture software, assign a zone to each camera's stream, and simply reset the oldest zone when you need to open a new one (rough sketch further down). A 10 TB drive is going to have something like 2,000-4,000 zones, and many disks will support 128 open zones. So you could have 120* cameras at low bit rates, and even 50 cameras at 4 MB/s shouldn't be a problem. No filesystem overhead; such a system could be very cost-effective.

*You would need a few open zones, or some of the sequential zones, to track things like metadata: which zones are recording which camera and when, alarm conditions, etc…

And of course you'd have to run the OS and application from a separate disk with a real filesystem.
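
For the curious, the zone-per-camera recycling described above could look roughly like this. It's only a sketch, and it uses the Linux zoned block-device ioctls from <linux/blkzoned.h> rather than libzbc itself so it stays dependency-free; the device path, zone number, and chunk size are made-up placeholders:

```c
/* One-zone-per-camera ring recording on a host-managed SMR disk:
 * pick a zone, reset it (dropping the oldest footage it held),
 * then append video data sequentially at the write pointer. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/blkzoned.h>

#define SECTOR_SIZE 512UL

struct cam_slot {
    uint32_t zone_idx;   /* zone currently assigned to this camera     */
    uint64_t wp;         /* software-tracked write pointer, in sectors */
};

int main(void)
{
    const char *dev_path = "/dev/sdX";          /* hypothetical HM-SMR disk       */
    int fd = open(dev_path, O_RDWR | O_DIRECT); /* O_DIRECT keeps writes in order */
    if (fd < 0) { perror("open"); return 1; }

    uint32_t zone_sectors = 0, nr_zones = 0;
    if (ioctl(fd, BLKGETZONESZ, &zone_sectors) ||   /* zone size in 512 B sectors */
        ioctl(fd, BLKGETNRZONES, &nr_zones)) {      /* total number of zones      */
        perror("ioctl"); return 1;
    }
    printf("zone size: %lu MiB, zones: %u\n",
           zone_sectors * SECTOR_SIZE >> 20, nr_zones);

    /* Recycle an arbitrary zone for camera 0: reset it, then append. */
    struct cam_slot cam = { .zone_idx = 100, .wp = 0 };
    struct blk_zone_range range = {
        .sector     = (uint64_t)cam.zone_idx * zone_sectors,
        .nr_sectors = zone_sectors,
    };
    if (ioctl(fd, BLKRESETZONE, &range)) {  /* oldest footage in this zone is gone */
        perror("BLKRESETZONE"); return 1;
    }
    cam.wp = range.sector;                  /* write pointer back to zone start */

    /* Append one 1 MiB chunk of (fake) video data at the write pointer. */
    size_t chunk = 1 << 20;
    void *buf;
    if (posix_memalign(&buf, 4096, chunk)) return 1;
    memset(buf, 0xAB, chunk);
    ssize_t w = pwrite(fd, buf, chunk, cam.wp * SECTOR_SIZE);
    if (w < 0) { perror("pwrite"); return 1; }
    cam.wp += (uint64_t)w / SECTOR_SIZE;

    /* A real capture loop keeps appending until cam.wp hits the zone end,
     * then grabs (and resets) this camera's oldest zone as the new target. */
    free(buf);
    close(fd);
    return 0;
}
```

A real recorder would keep one such slot per camera, advance the write pointer chunk by chunk, and track the which-zone-holds-what bookkeeping in a few reserved zones or on the OS disk, as mentioned above.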
