Intel DC P3700 reliability - 5 years of Power_on_Hours (Firmware Upgrade?)

Greetings, fellow hoo-mens!

I’ve recently acquired an Intel DC P3700 400GB NVMe PCIe card for effectively ~€40. Now i’d like to figure out how killer this deal actually was.

smartctl -x ...

SMART/Health Information (NVMe Log 0x02)
Critical Warning:                   0x00
Temperature:                        39 Celsius
Available Spare:                    100%
Available Spare Threshold:          10%
Percentage Used:                    1%
Data Units Read:                    74,250,989 [38.0 TB]
Data Units Written:                 42,248,413 [21.6 TB]
Host Read Commands:                 645,253,472
Host Write Commands:                1,389,027,929
Controller Busy Time:               21
Power Cycles:                       24
Power On Hours:                     44,627
Unsafe Shutdowns:                   16
Media and Data Integrity Errors:    0
Error Information Log Entries:      0

Those 21.6 TB written against a rated endurance of 7+ petabytes means there’s plenty of lifetime left.
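For anyone wanting to sanity-check smartctl’s bracketed TB figures: NVMe “Data Units” are counted in chunks of 512,000 bytes (1000 × 512-byte blocks), so the conversion is just multiplication. A quick sketch (the 7.3 PBW endurance figure is my assumption, based on Intel’s 10 DWPD × 5 years rating for the 400GB P3700):

```python
# NVMe spec: one "Data Unit" = 1000 x 512-byte blocks = 512,000 bytes.
BYTES_PER_DATA_UNIT = 512_000

data_units_written = 42_248_413  # from the smartctl output above
tb_written = data_units_written * BYTES_PER_DATA_UNIT / 1e12  # decimal TB

# Assumption: 400GB P3700 rated ~10 DWPD for 5 years -> ~7.3 PB written.
rated_endurance_tb = 7_300
pct_used = tb_written / rated_endurance_tb * 100

print(f"{tb_written:.1f} TB written, {pct_used:.2f}% of rated endurance")
# -> 21.6 TB written, 0.30% of rated endurance
```

Which lines up nicely with the drive’s own “Percentage Used: 1%” (that field is coarse, reported in whole percents).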

But what about those 44,627 Power_On_Hours?

  1. When should i start worrying?
  2. Shouldn’t i have upgraded its firmware to the latest version?
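For a sense of scale, converting those power-on hours to calendar time is one line of arithmetic:

```python
power_on_hours = 44_627  # from the SMART report above
hours_per_year = 24 * 365.25  # average year length

years = power_on_hours / hours_per_year
print(f"~{years:.1f} years powered on")
# -> ~5.1 years powered on
```

So the card has essentially run 24/7 since new, which is exactly what you’d expect from a datacentre pull.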

Power-on hours are significant for HDDs because they suffer physical wear, i.e. they have moving parts. That shouldn’t matter as much for an SSD, so I’d say it was a pretty good deal!


I agree.

I mean, electronic components like resistors also wear over time, but apart from that the NAND cells will be the largest contributor. Given these stats, the NAND on this SSD should be fine.

I mean, Intel seems to be quite a sophisticated engineering company, and we’re talking about a product designed for a high-stress DC (data centre) environment. I’d like to think they’ve taken care of speccing and sourcing reliable components.

In particular, all that support circuitry, e.g. card-local power regulation & supply, and those big caps - i think those are polymer caps, so they’re not as failure-prone as electrolytics.

If anyone had one of those cards fail - that’s a story i’d like to hear.

I happen to have several HDDs in my array (RAID6, local file server/backup) that are rapidly approaching 70k power-on hours. That’s about 8 years. Still fine. The OS of that server ran on an early OCZ SSD that also accumulated some 70k hours before I swapped it for a newer one fairly recently (IIRC last year or early this year). That OCZ drive was giving me trouble, which was the reason I changed it. (FYI: 32GB Onyx SATA 2 model, still have it)

Prior to upgrading to an HPE ProLiant MicroServer Gen10 Plus, my OS, AS WELL AS the ZFS ZIL & L2ARC, ran off a single SK Hynix Canvas SL308 120GB (SATA). Recently, evaluating its SMART report, i was shocked at the TBW i’ve burned through in ~4 years - not much life left in that puppy.


This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.