Devember 2022 | Windows XP-style website (source code released)

Hello internet,

I know it’s a late entry, but seeing as the challenge is to build something cool and work on it at least a little bit every day, and I’ve been accidentally doing exactly that, I figured I might as well post about it here (again).

This year I’ll be working on my server tools. These tools started out as my LXC administration scripts, and can still do that, but their focus has shifted a little. The scripts are now capable of installing and setting up a lot more: for example, they can set up a Debian installation suitable for container hosting in a libvirtd VM, completely automated from a single configuration. They need a rewrite and a proper release, and I’ve been slowly working toward that this December.
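Roughly, a single configuration for such a setup might look like this (a purely illustrative sketch; the variable names are made up and not the scripts’ real format):

```shell
# example.conf - hypothetical configuration sketch for an automated
# container-host install (all variable names are illustrative only)
VM_NAME="container-host"
VM_MEMORY_MB="4096"
VM_DISK_GB="32"
DEBIAN_RELEASE="bookworm"
LXC_BRIDGE="lxcbr0"
CONTAINERS="web db"
```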

I’ve also played around with 98.css, XP.css and 7.css, and it’s a lot of fun; my plan is to write a CGI front-end for my scripts that looks like a Windows XP desktop.
In fact, I’ve already started on that:


(Video is of the current state of the CSS test page)

All source code will be released under the MIT license (once it’s at least somewhat working).

See you in the next post. Comments and critiques are welcome!

UPDATE: I have a wild new idea and I’m not sure if it’s wise to extend the scope of this project that far. Let me know what you think! (Link to post follows)

UPDATE: I should probably post a link to the GitHub in the first post:


Progress report #1

Bash templates!

So, part of the software I’m writing for the Devember challenge is a CGI front-end for my bash scripts, naturally also written in bash.

WARNING: It’s a bad idea to allow user-input from a website to reach shell-scripts.

Now, bash is no stranger to string manipulation, so I figured this would be easy: everybody knows the echo $STUFF $(command stuff)-type syntax, and it’s ubiquitous when programming bash.
We basically want bash’s string interpolation, but for a complete document, and without any header etc. in the document.
My first thought was to read each line, then do what’s basically an improperly-quoted echo on purpose.
But this would have several problems: characters like < might break the template if unquoted, which would be very annoying. Then I got another idea: heredoc strings! Most people are familiar with the concept:

cat << EOF_INDICATOR
(bunch of stuff)
EOF_INDICATOR

But I wanted clean templates, not bash scripts! Thankfully, we can easily generate a bash script like this on the fly, since we don’t need to quote anything unnecessarily.
Just bash, echo, eval and cat!

So, here’s the complete nasty little bash template engine:
(template_engine.sh)

# very simple template engine using heredoc and eval.
# WARNING: Be very careful with user-supplied data in these templates,
# they are just shell scripts after all!

# this helper function adds a cat << EOF like prefix/postfix to stdin
function templatize() {
	local my_heredoc_str="EOF_${RANDOM}_EOF"
	echo "cat << ${my_heredoc_str}"
	cat
	echo "${my_heredoc_str}"
}

# run the template specified by $1 using eval/herestring
function template_eval() {
	local template_path="${1}"
	shift
	eval "$(templatize < "${template_path}")"
}

Example template:
(template.html)

<h1>Hello $USER</h1>
Hello $USER! I'm $HOSTNAME! At my end, it's $(date)<br>
Foo is: $FOO

Example CGI script:
(cgi-bin/hello.sh)

#!/bin/bash
. "template_engine.sh"
FOO="hello world"
template_eval template.html
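To see the whole mechanism in one place, here’s a self-contained round trip: templatize wraps the template in a heredoc’d cat script, and eval runs it, performing the interpolation:

```shell
#!/bin/bash
# Self-contained demo of the heredoc template engine from above.
function templatize() {
	local my_heredoc_str="EOF_${RANDOM}_EOF"
	echo "cat << ${my_heredoc_str}"
	cat
	echo "${my_heredoc_str}"
}

FOO="hello world"
# templatize wraps the template in "cat << EOF_12345_EOF ... EOF_12345_EOF",
# and eval'ing that expands $FOO and $(...) like any unquoted heredoc.
eval "$(echo 'Foo is: $FOO' | templatize)"
# prints: Foo is: hello world
```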

See you in the next progress report!

Progress report #2

Working Windows! (mostly)

Sometimes a short video can be more descriptive than many words:

Notes:

  • Not using any front-end framework, just plain Javascript.
  • Not using any back-end framework, just bash CGI scripts.
  • While the main start menu is shown using Javascript, the sub-menus use only CSS.
  • The background image is horrible and was created by me in Gimp2.
  • Window content is inside an iframe
    • The page inside each window iframe can set the window width, height, title and resizeable properties by including a magic comment in its markup, checked whenever the iframe loads a new page.
  • Currently only the window titlebar’s onmousemove event is checked, leading to strange behavior when the mouse is dragged quickly.
  • The implementation is somehow both really boring and really ugly.
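As a sketch of how such a magic comment could be parsed (the WINDOW: comment format and the function name here are my own illustration, not necessarily what the project uses):

```javascript
// Parse window options from a magic HTML comment like:
//   <!-- WINDOW: width=640 height=480 title=Editor resizeable=false -->
// Returns an options object, or null if no magic comment is present.
function parse_window_options(html) {
	const m = html.match(/<!--\s*WINDOW:([^>]*)-->/)
	if (!m) { return null }
	const options = {}
	for (const pair of m[1].trim().split(/\s+/)) {
		const [key, value] = pair.split("=")
		options[key] = value
	}
	return options
}
```

On iframe load, a window manager could run something like this over the iframe document’s markup and apply the width/height/title/resizeable values it finds.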

I hope the next posts will be a bit more technical. See you then!

this is awesome, not so sure I have heard anyone reference CGI in a decade though :wink: I honestly didn’t realize CGI scripts could be written in anything aside from Perl and C; in hindsight this was naive of me.

kind of a shame that developers have forgotten the art of CGI

I’m in the middle of making a resume/profile site for myself that looks like DOS. I’m tempted to change it to a window-managed style instead. Looking at this makes me think it would be more intuitive.

https://web.holmfamily.xyz/


Haha, thank you.
CGI is nice because it is very simple. Basically every application that can do stdio can do CGI. I’m sure it’s not well suited for large applications with strict performance requirements, but for a simple admin panel it surely is enough (and implementing an admin panel in the language most people administer their stuff with seemed like a good idea).
Right now, I’m working on implementing a simple read-only terminal emulator, with real-time command output, using the “Content-Type: text/event-stream” thing.
I might even implement bi-directional communication using FIFOs and two scripts, if I can be bothered.

in the off chance it helps, this exists… GitHub - jcubic/jquery.terminal: jQuery Terminal Emulator - JavaScript library for creating web-based terminals with custom commands. Might be smarter to use a fake terminal like this; that way only commands you authorize are allowed… of course this can easily call a CGI endpoint to get real output if needed

Again, the assumption for this server panel is that whoever has access might as well have a shell account, so the “security” is not a problem. If I were to make this publicly reachable, I would use e.g. HTTP basic auth and an nginx caching proxy in front of it (my actual plan is to have a busybox httpd listening only on 127.0.0.1, and use SSH forwarding, maybe even launch the web server directly via the same SSH command).
Also, I’m going to implement this without any jQuery/npm/whatever, since that’s honestly what I’m used to, and it’s so much smaller, in both complexity and delivered code size.
I’ve implemented a simple terminal emulator before. It shouldn’t be too difficult in Javascript, which is actually a lot like Lua, the language I’m most used to.

But thank you for the suggestion anyways :stuck_out_tongue:

Progress report #3

I’ll be brief, since I’m busy with all that Christmas stuff.
I’m working on creating a CGI wrapper for FIFOs in order to implement a proper read-write terminal emulator in pure bash+JS, with no external libraries. It’s fun :stuck_out_tongue:
Reading from a fifo is done using the Content-Type: text/event-stream/EventSource mechanism,
and writing using regular XHR.
I also have a lot of desktop improvements:

Happy Holidays!

Progress report #4

Nothing visual to show this time, just an update on what I’m working on.
I’m trying to implement a generic fifo-to-cgi bridge.
This would allow me to use fifos from js. There are 4 endpoints:
fifo_create.sh - Create a pair of FIFOs (basically just calls mkfifo with the correct arguments).
fifo_read.sh - Continuously stream data from the fifo.
fifo_write.sh - Write some bytes to the fifo.
fifo_command.sh - Launch a command, with the fifo as its stdin/stdout.

All of that already works somewhat, but not perfectly. I get a lot of strange errors (e.g. sometimes fifo_write.sh will never return).

The source code, if somebody is interested (let me know if you find some bugs!):
fifo_create.sh

#!/bin/bash
set -euo pipefail
cd -P -- "$(dirname -- "${BASH_SOURCE[0]}")"/../..

. utils/cgi.sh

# only POST allowed
[ "${REQUEST_METHOD}" = "POST" ] || exit_with_status_message "405" "Method not allowed"

# get the $query_parms_arr array, already URL-decoded
parse_query_parms_list "$(</dev/stdin)"
parse_query_parms_arr true

# create two fifos for the process
base_path="$(mktemp -u -q)"
output_fifo_path="${base_path}.out"
input_fifo_path="${base_path}.in"
mkfifo -m600 "${output_fifo_path}"
mkfifo -m600 "${input_fifo_path}"

# indicate success
cat << EOF
Content-type: application/json

{
	"success": true,
	"base_path": "${base_path}"
}
EOF

fifo_read.sh

#!/bin/bash
set -euo pipefail
cd -P -- "$(dirname -- "${BASH_SOURCE[0]}")"/../..

. utils/cgi.sh
. utils/hex.sh

# only GET allowed
[ "${REQUEST_METHOD}" = "GET" ] || exit_with_status_message "405" "Method not allowed"

# this script reads from a previously created fifo byte-by-byte,
# and sends the hex encoded bytes to the browser in real time using the
# text/event-stream content-type.

# get the $query_parms_arr array, already URL-decoded
parse_query_parms_list "${QUERY_STRING}"
parse_query_parms_arr true

# get the fifo file descriptor argument
base_path="${query_parms_arr[base_path]:-}"
[ "${base_path}" = "" ] && exit_with_status_message "400" "Bad request"
output_fifo_path="${base_path}.out"
[ -p "${output_fifo_path}" ] || exit_with_status_message "400" "Bad request"

# respond so the browser understands more data is coming
echo "Content-Type: text/event-stream"
echo

# stream the output of the command as hex-encoded characters,
# separated with an extra newline,
# until EOF is encountered.
while true; do
	while IFS="" read -rN1 data; do
		hex_data="$(char_to_hex "${data}")"
		#echo "event: read"
		echo "data: { \"data\": \"${hex_data}\" }"
		#echo "data: ${hex_data}"
		echo
	done < "${output_fifo_path}"
done
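The char_to_hex helper comes from utils/hex.sh, which isn’t shown in the post; a minimal implementation could look like this (my own sketch under that assumption, the real one may differ):

```shell
#!/bin/bash
# Hypothetical sketch of char_to_hex as sourced from utils/hex.sh:
# encode a single character as its two-digit hex character code.
function char_to_hex() {
	# a leading single quote makes printf treat the argument as
	# the numeric character code of the following character
	printf "%02x" "'${1}"
}

char_to_hex "A"   # prints: 41
```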

fifo_write.sh

#!/bin/bash
set -euo pipefail
cd -P -- "$(dirname -- "${BASH_SOURCE[0]}")"/../..

. utils/cgi.sh

# only POST allowed
[ "${REQUEST_METHOD}" = "POST" ] || exit_with_status_message "405" "Method not allowed"

# this script writes a string to the specified fifo

# get the $query_parms_arr array, already URL-decoded
parse_query_parms_list "$(</dev/stdin)"
parse_query_parms_arr true

# get the fifo file descriptor argument
base_path="${query_parms_arr[base_path]:-}"
[ "${base_path}" = "" ] && exit_with_status_message "400" "Bad request: no base_path"
input_fifo_path="${base_path}.in"
[ -p "${input_fifo_path}" ] || exit_with_status_message "400" "Bad request: fifo not found"

# get the string to write to the fifo
data="${query_parms_arr[data]:-}"
[ "${data}" = "" ] && exit_with_status_message "400" "Bad request: no data"

# write data to the fifo
[ "${data}" = "newline" ] && data=$'\n'
echo -n "${data}" > "${input_fifo_path}"

# Respond to the browser with the number of bytes written
cat << EOF
Content-type: application/json

{
	"success": true,
	"written": ${#data}
}
EOF

fifo_command.sh

#!/bin/bash
set -euo pipefail
cd -P -- "$(dirname -- "${BASH_SOURCE[0]}")"/../..

. utils/cgi.sh

# only POST allowed
[ "${REQUEST_METHOD}" = "POST" ] || exit_with_status_message "405" "Method not allowed"

# get the $query_parms_arr array, already URL-decoded
parse_query_parms_list "$(</dev/stdin)"
parse_query_parms_arr true

# get the command to run from the query_parms_arr
command_str="${query_parms_arr[command_str]:-}"

# get the fifos
base_path="${query_parms_arr[base_path]:-}"
[ "${base_path}" = "" ] && exit_with_status_message "400" "Bad request"
input_fifo_path="${base_path}.in"
[ -p "${input_fifo_path}" ] || exit_with_status_message "400" "Bad request"
output_fifo_path="${base_path}.out"
[ -p "${output_fifo_path}" ] || exit_with_status_message "400" "Bad request"

# launch the command, with the fifos as its stdin and stdout
#($command_str 2>&1 < "${input_fifo_path}") > "${output_fifo_path}" &
#"${input_fifo_path}" > /dev/zero &
#< "${input_fifo_path}" cat > "${output_fifo_path}" &
#while true; do
#	IFS="" read -rN1 data < "${input_fifo_path}"
#	printf "You typed: %q\n" "${data}" > "${output_fifo_path}"
#done &

# save the pid
#command_pid=$!
command_pid="0"
echo "${command_pid}" > "${base_path}.pid"

# indicate success
cat << EOF
Content-type: application/json

{
	"success": true,
	"pid": ${command_pid}
}
EOF

# close stdout to finish request immediately
exec >&-

On my TODO list:

  • File manager
  • Text editor
  • GUI script/“wizard” for managing containers
  • (once terminal is working) GUI script for SSH

Progress report #5

I haven’t gotten the terminal to work properly yet (I’m not giving up on that though!), but I thought I’d try to create some more small applications, so today I’ve created a simple text editor:

(saving also works. And yes, that is the entire server-side source code for the text editor :stuck_out_tongue: )

Progress report #6

Still working on the terminal stuff, might require another hack to get it working.
But I’ve worked on some more details, like implementing the basics of a file manager, and start menu icons (well, technically emoji :P).

(Navigation doesn’t yet work in the file manager)

Progress report #7

Terminal is finally working.
The new solution is somehow simultaneously both cleaner and less clean than my previous one: instead of simply running a command with its FDs pointing to FIFOs, I now use tmux to run the command and do most of the terminal rendering, then simply interact with tmux from my CGI scripts.
Now, unfortunately that means I can’t use the previous method of streaming the data using the event-source mechanism, and have to request updates on a timer.
Fortunately, tmux seems to respond quickly enough that my total request times are still low (~10ms).

So, I’ve had this strange idea that could be very cool.

Prelude

Really, I’ve been playing around with building desktop-like applications, backed by bash CGI scripts, as you can see in the previous posts. I’ve re-implemented serious parts of the typical I/O interactions of shell scripts (read/write files, FIFOs, spawn commands, etc.), and applications in general, generically in CGI. This all started from upgrading my LXC scripts to work in the browser; they set up Linux containers (and VMs!) in an automated way.

I’d even like to host a version of my scripts publicly, but that wouldn’t be safe (it’s basically a control panel for a server, after all).

But writing a lot of JS, and thinking about what parts of an application belong in the browser vs. the CGI-scripts has gotten me wondering:

Maybe we can just get an entire Linux VM in the browser (which I know is possible, because it’s been done before; I might even use those projects), which could then host the website and my scripts for a cool demo?

Doubts

This would leave you with a very sad Linux VM: little memory, no networking, slow performance, etc. Well, little memory and slow performance aren’t that bad, but no networking makes this VM kind of useless for anything serious. Also, it ceases to exist once the user closes the browser tab. There are JS Linux VMs out there that have network connectivity through a proxy and just limit you to one connection per IP and a few KB/s (but I really don’t want to become an ISP).

I think with some clever browser hacks, we can get some networking and some semblance of permanence.

The idea is this:
Visit a website and get a terminal to a Linux VM, running in Javascript in the local browser tab.
On tab close, pause/shut down the VM, serialize it, and send it peer-to-peer to another browser using WebRTC (peerjs).
This other browser could then continue to run the VM if trusted, or just store its state (possibly encrypted) for later.
You can return later and run your own VM again, or just access it remotely.
You could even network the running VMs together using the same WebRTC mechanism.

And as a side effect (main effect?) this could enable completely free hosting of small (Linux) VMs that run in your browser, or in the browsers of others, with permanent (within limits) storage and (limited) network access.

The idea is to always leave at least a single trusted, reachable browser tab open somewhere, so your trusted VMs can have CPU time and internet access. You could easily provide that yourself from any browser tab (if possibly only for a limited time). And while you’re doing that, you could also host another VM for somebody else. Maybe you don’t want to be the exit point of somebody else’s traffic, but you could still host them if their traffic exits from another peer.
You could host from a desktop, phone, server or anything else that can open a webpage and has any internet connection (no dedicated IP needed, thanks to WebRTC + STUN/TURN).

Because basically all you need to do to participate is leave a tab open, and one can measure how long somebody has a tab open and how many VMs they host, one could even calculate a leaderboard.

Maybe this could be something cool? What do you guys think? Could I get this done in the remainder of January? LET ME KNOW.

Progress report #8

I’m working on adding color support to the terminal. Fortunately, I don’t need to implement a complete ANSI terminal, just the SGR escape sequences, since cursor movement/reset/etc. are handled by tmux.
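A sketch of what SGR handling can boil down to on the client (function names and the CSS class scheme are my own illustration, not the project’s actual code):

```javascript
// Parse an SGR escape sequence like "\x1b[1;31m" into its numeric
// parameters, then map those parameters to CSS classes for rendering.
function parse_sgr_params(seq) {
	const m = seq.match(/^\x1b\[([0-9;]*)m$/)
	if (!m) { return null }
	// an empty parameter list means "reset", i.e. SGR 0
	if (m[1] === "") { return [0] }
	return m[1].split(";").map(Number)
}

function sgr_to_classes(params) {
	const classes = []
	for (const p of params) {
		if (p === 1) { classes.push("term-bold") }
		else if (p >= 30 && p <= 37) { classes.push("term-fg-" + (p - 30)) }
		else if (p >= 40 && p <= 47) { classes.push("term-bg-" + (p - 40)) }
	}
	return classes
}
```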

I’m also working on the browser-VM thing, and I’ve decided to use v86 for now, since it exposes most of its functionality directly in JS, making interfacing easier, as opposed to, say, TinyEMU, which uses C and transcompilation. Also, v86 has good support for loading/saving state.

Last, I’d like to mention this bit of documentation:


v86/windows-xp.md at master · copy/v86 · GitHub
Yes, that’s right, you will be able to run actual Windows XP, inside my mock Windows XP, using Javascript :wink:
I probably can’t host the HDD images for licensing reasons. I’ll have to look into that (there are other projects that redistribute Windows XP-based images somehow).

Progress report #8, addendum
It turns out that it’s stupidly easy to hack together a simple DOS prompt that runs actual DOS (well, FreeDOS anyway):

I’ll of course add a more complete virtual machine manager-thing in a Windows-XP style. Just thought I’d share this already :smiley:


Progress report #9

I’ve noticed there are some minor problems with this project.
For example, I’ve been using xp.css to prototype the UI for this project, and I like the idea of a Windows XP desktop-inspired website.
But I’ve noticed that xp.css uses extracted Windows fonts, which is nice since it looks like the original, but I don’t think I have the license to use these fonts on the web (and I think neither does xp.css, but that’s their problem). Microsoft does sell font licenses, including for some of the fonts used by xp.css, I think.

I also don’t like that xp.css uses SCSS, and thus requires an additional compiler and related tooling, or that it re-styles all elements instead of using a CSS class-based system. Or that it uses inline SVG for some icons.

So I’ve decided to re-implement xp.css (or at least the bits I need). It’s actually not that complicated, and in a day I’ve reconstructed most of the UI. Bonus: the icons and everything now scale fine.

Everything you see uses my custom XP-style CSS, which I’ve called maXp.

Not everything is working as expected yet, but I’m making great progress.
I also want to finish up the terminal emulator and the LXC script UI.
And I want to build a static version of this that doesn’t reveal too much information, for hosting on my website (updated via cron job).
Then it’s release time (or the deadline, whichever comes first).


Progress report #10

Too much stuff to do, still.

Here’s a short clip showing desktop icons, and the new terminal manager.

Some improvements to the Window decoration are also apparent.

Just a funny glitch I’ve encountered:

Still furiously working on the project…
(I hate that iframes lose their content when moved inside the DOM)

Okay, so I think I’ve encountered an actual Chrome bug.
As you know, part of my devember project is implementing a window manager in Javascript.
To that end, I’ve come up with a design that creates an iframe called “make_new_win” from Javascript, which is embedded in a hidden window. When a hyperlink targets make_new_win (<a target="make_new_win" ...>), the make_new_win onload callback shows the window, renames the iframe, and creates a new hidden window called make_new_win with the same onload callback function (there is always a hidden window called make_new_win that can be targeted by a link; just which iframe that is differs). This is a nice solution since it also hides the window until the website is fully loaded (onload called), so you don’t see “flicker” or resizing while the page loads in a window.

The problem is that apparently the target attribute’s resolved iframe gets cached somewhere, and I can’t seem to reset it. At least in Chrome(-ium); it works fine in Firefox.
I have Chromium from Debian, Version 109.0.5414.119 (Official Build) built on Debian bookworm/sid, running on Debian bookworm/sid (64-bit). I’ve also tried official Google Chrome, Version 109.0.5414.119 (Official Build) (64-bit), with the same results.

I’ve attached a video of the strange behavior with a minimal demo(source below).

Minimal reproducer:

<!DOCTYPE html>

<h1>Iframe test</h1>
<p>
<a href="/static/html/xp/l1t.html" target="make_new_iframe">l1t</a>
<a href="/static/html/xp/run.html" target="make_new_iframe">run</a>
<a href="about:blank" target="make_new_iframe">blank</a>
</p>

<script>
function iframe_onload() {
	console.log("iframe_onload")
	let iframe_elem = document.getElementById("make_new_iframe")
	if (iframe_elem.contentDocument.location.href == "about:blank") {
		console.log("is blank")
		return
	}
	console.log("renaming")
	iframe_elem.removeAttribute("id");
	iframe_elem.removeAttribute("name");
	iframe_elem.removeAttribute("style");
	iframe_elem.onload = undefined
	document.body.appendChild(make_iframe())
}

function make_iframe() {
	console.log("make_iframe")
	let iframe_elem = document.createElement("iframe")
	iframe_elem.id = "make_new_iframe"
	iframe_elem.name = "make_new_iframe"
	iframe_elem.style = "border-color: #f33;"
	iframe_elem.onload = iframe_onload
	return iframe_elem
}


document.body.appendChild(make_iframe())
</script>

I’ve updated the demonstration page for the bug, and included a (very hacky) fix for the issue:

<!DOCTYPE html>
<p>
	<a href="/static/html/test1.html" target="make_new_iframe">test 1</a>
	<a href="/static/html/test2.html" target="make_new_iframe">test 2</a>
	<button onclick="fix()">fix(use body onclick callback)</button>
</p>
<style>
	[name="make_new_iframe"] {
		border-color: #f33;
	}
</style>
<script>
// on load the make_new_iframe iframe,
// rename and unset onload function,
// then create new iframe.
function iframe_onload() {
	console.log("iframe_onload")
	if (this.contentDocument.location.href == "about:blank") {
		console.log("is blank")
		return
	}
	console.log("renaming")
	this.removeAttribute("name");
	this.onload = undefined
	make_iframe()
}

// create an iframe with the name "make_new_iframe" and the iframe_onload() callback set.
function make_iframe() {
	console.log("make_iframe")
	let iframe_elem = document.createElement("iframe")
	iframe_elem.name = "make_new_iframe"
	iframe_elem.onload = iframe_onload
	document.body.appendChild(iframe_elem)
	return iframe_elem
}

// fix the problem by overriding the onclick behaviour of <a> elements with a target= attribute set.
function fix() {
	let previous_onclick = document.body.onclick
	document.body.onclick = function(e) {
		console.log("onclick",e.target.tagName)
		if ((e.target.tagName=="A") && (e.target.target) && (e.target.href) ) {
			console.log("target:",e.target.target)
			console.log("href:",e.target.href)
			let iframe_elem = document.getElementsByName(e.target.target)[0]
			iframe_elem.src = e.target.href
			return e.preventDefault()
		}
		if (previous_onclick) {
			previous_onclick(e)
		}
	}
}

make_iframe()
</script>

As you can see, the fix is basically implementing the expected behavior of an <a> element with the target attribute set, in the simplest way possible. But it works, even in Chrome. Incredibly annoying; here I was thinking I could focus on polishing my project instead of chasing strange Chrome problems.
(And it’s not like this is some complicated thing or rare edge-case. It’s just a hyperlink with a target, pointing to a dynamically-created iframe)

EDIT: And it’s not like I haven’t tried other methods for achieving the same.
I have three other complete implementations that try to do the same thing.
In one, iframes moved in the DOM via Javascript lose their content, which is why pre-creating the iframes is needed in the end.
In another I’m using a single named iframe and generating new iframes when it is navigated, but this results in two page loads instead of one. Or you can manually add onclick functions to links, which is what I’ve done before.

How it worked previously:
// Overwrite the default behaviour of links in the start-menu
// to open a window with an iframe instead.
for (let elem of start_menu_elem.getElementsByTagName("a")) {
	if (elem.href) {
		elem.onclick = function(e) {
			// shutdown button needs to redirect to about:blank not in an iframe
			if (!(elem.href == "about:blank")) {
				add_window(make_iframe_window(elem.innerHTML, elem.href, false, 640, 480))
				e.preventDefault()
			}
		}
	}
}
Trying to move an iframe:

function onload_make_new_win_elem() {
	let navigated_loc = make_new_win_elem.contentWindow.location.href
	if (navigated_loc == "about:blank") {
		return;
	}
	console.log("navigated_loc", navigated_loc)

	// remove id and style from iframe
	make_new_win_elem.hidden = false
	make_new_win_elem.removeAttribute("name");

	// create a window with pre-loaded iframe
	add_window(make_iframe_window("title", undefined, false, 640, 480, make_new_win_elem))

	// create a new iframe
	make_new_win_elem = document.createElement("iframe")
	make_new_win_elem.hidden = true
	make_new_win_elem.name="make_new_win"
	make_new_win_elem.onload = onload_make_new_win_elem
	document.body.appendChild(make_new_win_elem)
}
make_new_win_elem.onload = onload_make_new_win_elem

Just creating new iframes onload (double load):

// works, but makes iframes load twice (annoying)
make_new_win_elem.onload = function(e) {
	let navigated_loc = make_new_win_elem.contentWindow.location.href
	console.log("xxx onload ", navigated_loc)
	if (navigated_loc !== "about:blank") {
		add_window(make_iframe_window("", navigated_loc, false, 640, 480))
		//make_new_win_elem.src = "about:blank"
	}
	e.preventDefault()
}