Extremely accurate system time: Chrony (not cron), a PTP NIC clock, and NIST (atomic-clock-based) time servers

Create an A record on your DNS server for time.apple.com to point to your local NTP server.
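If the DNS server happens to be dnsmasq (which Pi-hole uses under the hood), a one-line override does the same job; the 192.168.1.10 below is a placeholder for your local NTP server's address:

```
# dnsmasq: answer any query for time.apple.com with the local NTP server
address=/time.apple.com/192.168.1.10
```

Clients keep querying the hardcoded hostname and never know the difference.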

It doesn't. I personally have an NTP and PTP server at home because I do scientific research with radio.

Ultra-precise time for a homelab that isn't managing events at the scale of Facebook or financial trading is just plain autism. But hey, it's a good way to learn. That's what people should take away from it.

Since I like tracking space objects and vehicles I tend to want precise time, but even then, sub-ms is still splitting hairs.

lol probably

:joy: Yeah, I know that sounds rude, but I'm being frank about it.

Thing is, though, I don't discourage it. It's a good project for anyone wanting to understand one of the more basic parts of infrastructure. Personally, I've been wanting to throw my weight behind encrypted or authenticated time. I'd like to be able to give some people PTP access or stratum 0/1, while the rest operate off stratum 2 and are heavily rate-limited.
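On the authenticated-time front, chrony already speaks NTS (Network Time Security). A minimal chrony.conf sketch along those lines; the certificate/key paths and the subnet are placeholders:

```
# Pull NTS-authenticated time from an upstream that supports it
server time.cloudflare.com iburst nts

# Serve NTS to local clients (placeholder cert/key paths)
ntsservercert /etc/chrony/nts.crt
ntsserverkey /etc/chrony/nts.key

# Heavily rate-limit plain NTP clients
ratelimit interval 1 burst 4

allow 192.168.0.0/16
```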

Windows is terrible. It has a config for 60-second intervals, but it enables bursting by default and slams out requests. It's so dumb.
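For anyone stuck with that behavior, the Windows time client can at least be pointed somewhere sane with w32tm; a sketch, assuming a local server named ntp.local (placeholder) and an elevated prompt:

```
w32tm /config /manualpeerlist:"ntp.local,0x8" /syncfromflags:manual /update
w32tm /resync
```

The 0x8 flag forces plain client-mode requests instead of symmetric-active mode.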

So I rigged up a script that moves files around, validates their checksums, and overly verbosely logs the data.

import os
import shutil
import logging
import time
import sqlite3
import hashlib
from colorama import init, Fore, Style
import configparser
import statistics

# Initialize colorama with autoreset
init(autoreset=True)

# Configure logger
logger = logging.getLogger("logger")
logger.setLevel(logging.INFO)
handler = logging.FileHandler("transfer.log")
handler.setLevel(logging.INFO)
formatter = logging.Formatter(
    "%(asctime)s.%(msecs)03d - %(levelname)s - %(message)s", "%Y-%m-%d %H:%M:%S"
)
handler.setFormatter(formatter)
logger.addHandler(handler)
# Define the database name
DB_NAME = "telemetry.db"

# Create SQLite database and tables for file transfer info
conn = sqlite3.connect(DB_NAME)
cursor = conn.cursor()

# Define the set of expected columns
expected_columns = set(
    [
        "original_file_size",
        "file_name",
        "file_size",
        "transfer_time",
        "transfer_speed",
        "relative_transfer_rate",
        "source_checksum_performance",
        "destination_checksum_performance",
        "standard_deviation",
        "transfer_speed_std_dev",
        "relative_transfer_rate_std_dev",
    ]
)

# Check if the table exists
cursor.execute(
    """
    SELECT count(name) FROM sqlite_master WHERE type='table' AND name='file_transfer_info'
"""
)

# If the count is 1, then table exists
if cursor.fetchone()[0] == 1:
    print("Table exists.")
    # Check the existing columns
    cursor.execute(
        """
        PRAGMA table_info(file_transfer_info);
    """
    )
    existing_columns = {column[1] for column in cursor.fetchall()}
else:
    print("Table does not exist.")
    # Create the table with all columns
    cursor.execute(
        """
        CREATE TABLE file_transfer_info
        (original_file_size REAL, file_name TEXT, file_size REAL, transfer_time REAL, transfer_speed REAL, relative_transfer_rate REAL, source_checksum_performance REAL, destination_checksum_performance REAL, standard_deviation REAL, transfer_speed_std_dev REAL, relative_transfer_rate_std_dev REAL)
    """
    )
    conn.commit()
    # Since we just created the table with all columns, there are no missing columns
    existing_columns = expected_columns

# Find out which columns are missing
missing_columns = expected_columns - existing_columns

# Add any missing columns
for column in missing_columns:
    cursor.execute(
        f"""
        ALTER TABLE file_transfer_info
        ADD COLUMN {column} REAL;
    """
    )
conn.commit()


def print_info(message, color=None):
    if color:
        print(color + message + Style.RESET_ALL)
    else:
        print(message)


def _format_bytes_per_second(bytes_per_second):
    suffixes = ["B/s", "KB/s", "MB/s", "GB/s", "TB/s"]
    index = 0
    while bytes_per_second >= 1024 and index < len(suffixes) - 1:
        bytes_per_second /= 1024
        index += 1
    return f"{bytes_per_second:.2f} {suffixes[index]}"


def calculate_checksum(file_path, is_source=True):
    chunk_size = 4096
    hash_obj = hashlib.md5()
    total_size = os.path.getsize(file_path)

    if total_size == 0:
        print("File size is zero. Checksum performance cannot be calculated.")
        return "", 0, "", 0

    start_time = time.time()

    with open(file_path, "rb") as file:
        checksum_performances = []  # Reset progress list for each new file
        while True:
            data = file.read(chunk_size)
            if not data:
                break

            hash_obj.update(data)
            elapsed_time = max(time.time() - start_time, 0.001)
            progress_percentage = min((file.tell() / total_size) * 100, 100)

            if (
                not checksum_performances
                or progress_percentage - checksum_performances[-1] >= 5
            ):
                checksum_performances.append(progress_percentage)
                if is_source:
                    print_info(
                        f"Checksum Progress (Source): {progress_percentage:.2f}%",
                        color=Fore.CYAN,
                    )
                else:
                    print_info(
                        f"Checksum Progress (Destination): {progress_percentage:.2f}%",
                        color=Fore.CYAN,
                    )

    readable_hash = hash_obj.hexdigest()

    avg_checksum_progress = sum(checksum_performances) / len(checksum_performances)
    standard_deviation = (
        sum((x - avg_checksum_progress) ** 2 for x in checksum_performances)
        / len(checksum_performances)
    ) ** 0.5

    elapsed_time = max(time.time() - start_time, 0.001)
    bytes_processed_per_second = total_size / elapsed_time
    performance_representation = _format_bytes_per_second(bytes_processed_per_second)

    return (
        readable_hash,
        avg_checksum_progress,
        performance_representation,
        standard_deviation,
    )


def calculate_standard_deviation(data):
    return statistics.stdev(data) if len(data) > 1 else 0


def get_directory_input(prompt_message):
    print(Fore.YELLOW + prompt_message + Style.RESET_ALL)
    while True:
        dir_path = input()
        if os.path.isdir(dir_path):
            return dir_path
        else:
            print(Fore.RED + "Invalid directory, please try again." + Style.RESET_ALL)


def get_user_input(message):
    user_input = input(message).strip().lower()
    return user_input == "y" or user_input == "yes"


def save_config(config):
    with open("config.ini", "w") as configfile:
        config.write(configfile)


def load_config():
    config = configparser.ConfigParser()
    if os.path.exists("config.ini"):
        config.read("config.ini")
    else:
        config["Folders"] = {"source_dir": "", "destination_dir": ""}
    return config


def process_files(directory_path, destination_dir):
    print_info(f"Processing files in directory: {directory_path}")
    transfer_speeds = []  # List to store transfer speeds for each file
    relative_transfer_rates = []  # List to store relative transfer rates for each file

    for root, dirs, files in os.walk(directory_path):
        for file in files:
            file_path = os.path.join(root, file)
            file_size_mb = os.path.getsize(file_path) / (
                1024**2
            )  # Converting to megabytes

            # Skip files with size less than 1 MB
            if file_size_mb < 1:
                print_info(
                    f"Skipping file {file_path} with size {file_size_mb:.3f} MB",
                    color=Fore.YELLOW,
                )
                continue

            result = move_files(file_path, destination_dir)
            if result is not None:
                transfer_speed, relative_transfer_rate = result
                transfer_speeds.append(transfer_speed)
                relative_transfer_rates.append(relative_transfer_rate)

    # Calculate and log aggregate standard deviations across all transferred files
    transfer_speed_std_dev = calculate_standard_deviation(transfer_speeds)
    relative_transfer_rate_std_dev = calculate_standard_deviation(
        relative_transfer_rates
    )
    logger.info(
        f"Aggregate transfer speed std dev: {transfer_speed_std_dev:.2f} Mbps; "
        f"relative transfer rate std dev: {relative_transfer_rate_std_dev:.2f} MB/Mbps"
    )


def move_files(file_path, destination_dir):
    print_info(f"Started moving file: {file_path}")
    os.makedirs(destination_dir, exist_ok=True)
    new_file_path = os.path.join(destination_dir, os.path.basename(file_path))
    original_checksum, _, source_checksum_performance, _ = calculate_checksum(file_path)

    print_info(f"Original Checksum: {original_checksum}", color=Fore.MAGENTA)

    start_time = time.perf_counter()  # Start timer

    # Original file size before moving
    original_file_size = os.path.getsize(file_path) / (
        1024**2
    )  # Converting to megabytes

    try:
        shutil.move(file_path, new_file_path)
    except Exception as e:
        logger.error(
            f"An error occurred while moving file {file_path} to {new_file_path}: {e}"
        )
        return None  # Indicate failure so the caller skips this file

    end_time = time.perf_counter()  # End timer

    # Updated file size after moving
    file_size = os.path.getsize(new_file_path) / (1024**2)  # Converting to megabytes

    (
        new_checksum,
        _,
        destination_checksum_performance,
        std_dev_destination,
    ) = calculate_checksum(new_file_path, is_source=False)

    print_info(f"New Checksum: {new_checksum}", color=Fore.MAGENTA)

    if original_checksum != new_checksum:
        logger.error(f"File {file_path} was corrupted during transfer.")
        return None  # Indicate failure so the caller skips this file

    transfer_time = end_time - start_time
    # file_size is in MB, so file_size * 8 gives megabits; megabits / s = Mbps
    transfer_speed = (file_size * 8) / transfer_time  # Mbps
    relative_transfer_rate = file_size / (
        transfer_speed * 1024
    )  # MB moved per unit of throughput

    # The std dev of a single sample is always 0; these per-file values exist
    # only to keep the database row schema uniform
    transfer_speed_std_dev = 0.0
    relative_transfer_rate_std_dev = 0.0

    logger.info(
        f"Transfer time: {transfer_time:.3f} seconds. Transfer speed: {transfer_speed:.3f} Mbps. Relative transfer rate: {relative_transfer_rate:.3f} MB/Mbps"
    )
    logger.info(f"Transfer Speed Standard Deviation: {transfer_speed_std_dev:.2f} Mbps")
    logger.info(
        f"Relative Transfer Rate Standard Deviation: {relative_transfer_rate_std_dev:.2f} MB/Mbps"
    )

    # Insert the file transfer info into the SQLite database with the new columns for both source and destination checksum performance
    cursor.execute(
        """
    INSERT INTO file_transfer_info 
    VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
    """,
        (
            original_file_size,
            os.path.basename(file_path),
            file_size,
            transfer_time,
            transfer_speed,
            relative_transfer_rate,
            source_checksum_performance,
            destination_checksum_performance,
            std_dev_destination,
            transfer_speed_std_dev,
            relative_transfer_rate_std_dev,
        ),
    )

    # Display additional telemetry data in the console
    print_info(f"Original File Size: {original_file_size:.3f} MB", color=Fore.YELLOW)
    print_info(f"File Size After Move: {file_size:.3f} MB", color=Fore.YELLOW)
    print_info(f"Transfer Time: {transfer_time:.3f} seconds", color=Fore.BLUE)
    print_info(f"Transfer Speed: {transfer_speed:.3f} Mbps", color=Fore.GREEN)
    print_info(
        f"Relative Transfer Rate: {relative_transfer_rate:.3f} MB/Mbps",
        color=Fore.MAGENTA,
    )
    
    conn.commit()

    # Report success so process_files can record these values
    return transfer_speed, relative_transfer_rate


if __name__ == "__main__":
    # Load configuration from config.ini
    config = load_config()
    source_dir = config.get("Folders", "source_dir", fallback="")
    destination_dir = config.get("Folders", "destination_dir", fallback="")

    # Prompt for source and destination directories if they are not set
    if not source_dir or not destination_dir:
        print(
            Fore.YELLOW
            + "No source or destination folder found in the configuration."
            + Style.RESET_ALL
        )

        if get_user_input("Would you like to set the source folder? (y/n): "):
            source_dir = get_directory_input("Enter the source directory path: ")
            config["Folders"]["source_dir"] = source_dir

        if get_user_input("Would you like to set the destination folder? (y/n): "):
            destination_dir = get_directory_input(
                "Enter the destination directory path: "
            )
            config["Folders"]["destination_dir"] = destination_dir

        # Save the directories to config.ini
        save_config(config)

    print(Fore.GREEN + "Starting file processing..." + Style.RESET_ALL)
    logger.info("File processing started.")

    process_files(source_dir, destination_dir)

    print(Fore.GREEN + "File processing completed." + Style.RESET_ALL)
    logger.info("File processing completed.")
    conn.close()

| original_file_size | transfer_time | transfer_speed | relative_transfer_rate | standard_deviation |
| --- | --- | --- | --- | --- |
| 8016.502166 | 0.65201813 | 0.094707408 | 79.59205688 | 28.83287348 |

I ran the same script again, this time after stopping all time services on my PC and intentionally injecting errors into the system time.

The results? Despite the rate of time errors and the size of their deltas, the performance is EXACTLY the same for the same set of 20 movie files.

| original_file_size | transfer_time | transfer_speed | relative_transfer_rate | standard_deviation |
| --- | --- | --- | --- | --- |
| 8016.502166 | 0.65201813 | 0.094707408 | 79.59205688 | 28.83287348 |

Here is the script I used to inject the time errors:
import time
import random
from datetime import datetime
import os

def change_system_time():
    while True:
        try:
            # Get the current system time
            current_time = time.time()
            current_time_str = datetime.fromtimestamp(current_time).strftime('%Y-%m-%d %H:%M:%S')

            # Generate a random offset between -60 and 60 seconds (both positive and negative values)
            offset = random.randrange(-60, 61)
            
            # Calculate the new time with the random offset
            new_time = current_time + offset
            new_time_str = datetime.fromtimestamp(new_time).strftime('%m-%d-%y %H:%M:%S')
            
            # Print the current time and the new time with the offset
            print(f"Current time: {current_time_str}, Changing system time to: {new_time_str}")

            # Set the new system time. On Windows, `date` and `time` are
            # separate commands and both need administrator privileges.
            # os.system does not raise on failure, so check the exit codes.
            date_part = datetime.fromtimestamp(new_time).strftime('%m-%d-%y')
            time_part = datetime.fromtimestamp(new_time).strftime('%H:%M:%S')
            if os.system(f"date {date_part}") == 0 and os.system(f"time {time_part}") == 0:
                print("Time changed successfully!")
            else:
                print("Failed to change the time (are you running as administrator?)")
            
            # Generate a random sleep interval between 1 and 6 seconds
            sleep_interval = random.randint(1, 6)

            # Wait for the random sleep interval before changing the time again
            time.sleep(sleep_interval)
            
        except KeyboardInterrupt:
            print("Script stopped by the user.")
            break

if __name__ == "__main__":
    change_system_time()

So based on initial testing, time doesn't matter at these upper layers.

Thank you! This is an easy way to redirect time updates locally without setting up a whole separate firewall system! I'll definitely be using this; it seems way more fun than blocking all time domains. But how does the device asking for the domain know what to look for?

Devices are hardcoded to query certain servers (time.apple.com for iDevices, time.windows.com for Windows, the ntp.org pool for Linux installs), and since regular NTP isn't authenticated, you can pull shit like this. :slight_smile:

PSA:
These 10-dollar GPS/GLONASS sticks of this name, and their many clones:
Amazon.com: HiLetgo VK172 G-Mouse USB GPS/GLONASS USB GPS Receiver for Windows 10/8/7/VISTA/XP : Electronics

Kinda suck. I can't seem to get a lock. Given the inherent ~25-foot limit of USB 2.0, even walking around a bit with it on a laptop, they're basically useless? What sucks is they have a red light labeled GPS that's always on. One would assume that a single lit light indicates good behavior, as in GPS lock?
This is at 9600 baud. ChatGPT says seeing all those nines is indicative of the device not having a lock.

$GPTXT,01,01,02,u-blox ag - www.u-blox.com*50
$GPTXT,01,01,02,HW  UBX-G70xx   00070000 *77
$GPTXT,01,01,02,ROM CORE 1.00 (59842) Jun 27 2012 17:43:52*59
$GPTXT,01,01,02,PROTVER 14.00*1E
$GPTXT,01,01,02,ANTSUPERV=AC SD PDoS SR*20
$GPTXT,01,01,02,ANTSTATUS=OK*3B
$GPTXT,01,01,02,LLC FFFFFFFF-FFFFFFFD-FFFFFFFF-FFFFFFFF-FFFFFFF9*53
$GPRMC,,V,,,,,,,,,,N*53
$GPVTG,,,,,,,,,N*30
$GPGGA,,,,,,0,00,99.99,,,,,,*48
$GPGSA,A,1,,,,,,,,,,,,,99.99,99.99,99.99*30
$GPGLL,,,,,,V,N*64
$GPRMC,,V,,,,,,,,,,N*53
$GPVTG,,,,,,,,,N*30
$GPGGA,,,,,,0,00,99.99,,,,,,*48
$GPGSA,A,1,,,,,,,,,,,,,99.99,99.99,99.99*30
$GPGSV,1,1,01,15,,,21*7F
$GPGLL,,,,,,V,N*64
(the same no-fix cycle repeats; at most one satellite, PRN 15, is ever reported)
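For what it's worth, you don't need ChatGPT to interpret that output: the fix-quality field of the $GPGGA sentence states it directly. A minimal sketch in Python (the gga_fix_quality helper is mine, not part of any library):

```python
# Fix quality lives in field 6 of a $GPGGA sentence (after splitting on commas):
# 0 = no fix, 1 = GPS fix, 2 = DGPS fix.
def gga_fix_quality(sentence: str) -> int:
    """Return the fix-quality field of a $GPGGA NMEA sentence."""
    body = sentence.split("*")[0]  # drop the trailing checksum
    fields = body.split(",")
    if not fields[0].endswith("GGA"):
        raise ValueError("not a GGA sentence")
    return int(fields[6]) if fields[6] else 0

print(gga_fix_quality("$GPGGA,,,,,,0,00,99.99,,,,,,*48"))  # prints 0: no lock
```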

Ah…
So I fixed that issue with a really long USB cable. I now have GPS clock sync. More to come.
Finally, though, I have come to understand why none of this really matters for most use cases. RTT is calculated from the same system's clock read twice: once when the packet is sent and once when the acknowledgement arrives. The other system's clock never enters that calculation, which is why we don't see more widespread time-related issues.
Gawsh, I am thick-headed…
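A quick way to convince yourself: Python's time.monotonic() is exactly this kind of single-machine clock, and stepping the wall clock doesn't touch it. A sketch (measure_rtt is a stand-in for a real send/ack round trip, not an actual network call):

```python
import time

def measure_rtt(work) -> float:
    """Time a round trip using one machine's monotonic clock, read twice."""
    start = time.monotonic()  # immune to NTP steps and manual clock changes
    work()                    # stand-in for send + wait-for-acknowledgement
    return time.monotonic() - start

rtt = measure_rtt(lambda: time.sleep(0.01))
print(f"RTT: {rtt * 1000:.1f} ms")
```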

Chrony can be configured to use your network interface card's PTP hardware clock (the PHC inside the NIC chip) to measure drift, and then steer the system clock from it.
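In chrony.conf terms that is the hwtimestamp directive (the interface name below is a placeholder); if something else is disciplining the NIC's PTP hardware clock, it can also be fed in as a reference clock:

```
# Enable NIC hardware timestamping ("hwtimestamp *" covers all capable NICs)
hwtimestamp eth0

# Optionally use the NIC's PTP hardware clock as a reference clock
refclock PHC /dev/ptp0 poll 0
```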

ethtool -T <interface> shows whether the NIC supports hardware timestamping and which PTP hardware clock (/dev/ptpN) it exposes.