I am interested in setting up an in-house WordPress Testing/Development Environment and am seeking advice from the community on approaches for doing this. My goal is to be able to develop/debug sites in a safe testing environment, which simulates the actual production environment.
I'd prefer using Linux for this. So far my plan is to create a subnet and attach a dedicated local web server running Apache. I'd install PHP and any other dependencies needed for WordPress and assign it a static local IP address. Lastly, I think I'd need to set up some kind of local DNS to direct all requests on the subnet to the local site rather than the actual deployed site.
I believe the above would allow me to, while on the testing subnet, type in the actual website's address and be directed to the local testing site rather than the deployed site found online.
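For the web server piece, I'm picturing a virtual host along these lines (the domain and paths are placeholders; `thesite.org` stands in for whichever production domain I'm mimicking):

```
# /etc/apache2/sites-available/thesite.org.conf  (hypothetical domain)
<VirtualHost *:80>
    ServerName thesite.org
    ServerAlias www.thesite.org
    DocumentRoot /var/www/thesite.org
    <Directory /var/www/thesite.org>
        AllowOverride All    # WordPress pretty permalinks rely on .htaccess
        Require all granted
    </Directory>
</VirtualHost>
```

Enabled with `a2ensite thesite.org` and an Apache reload, if I'm on Debian/Ubuntu.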
Please share your advice on the above. Your insight and evaluations are greatly appreciated.
Build a local laboratory subnet.
Two hardware servers.
One HW server is a file server (FS) with 3 VMs: one holds the current live sites, the second is a Git server, the third is a deployment server.
The second HW server is a web server with Vagrant.
I'm not very experienced with this sort of thing, but you might find Vagrant (https://www.vagrantup.com/) really useful. I liked it when playing around with making a website. That's assuming you're okay with doing the things you outlined in your planning stage inside a VM.
@KYguyFromKY I watched an overview video of Vagrant, and that seems like a solid solution for me. It could cover a lot of the technical work I was planning on doing. Thanks for the suggestion.
Thanks for the response. I'm reading through the documentation in your link right now.
I think your planning sounds good so far.
Why the separate subnet if it's just something you want to run locally (not having Apache and your web server available outside of your network)? Maybe just run it in your regular in-home subnet.
I don't think you'll need a server running DNS unless you want to access the site by its hostname or FQDN.
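And even then, if it's only your development machine that needs to resolve the name, a one-line `/etc/hosts` entry gets you FQDN access without running a DNS server at all (IP and domain below are just examples):

```
# /etc/hosts on the development machine (example values)
192.168.50.10   thesite.org www.thesite.org
```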
Hope this helps.
My goal is to access the site from within the development environment by its FQDN. I'm not sure if that's necessary or practical for a development environment (I've never built one or worked anywhere that had one, so I'm kind of in the dark), but I want to simulate the actual deployed environment as closely as possible.
The reason for the subnet is so that if someone on my local network wanted to visit the site, they would go to the real site, not my development server's site.
Ah, I think you have me a bit confused now. Which may be my own doing...
You currently have a live website in production, hosted somewhere? And you want to simulate this on your home network so that you can test/debug?
I would just restrict the access to your local network and then when you're ready to push it out live, you can do that. Maybe I'm missing something...?
I try to keep my posts extremely brief, probably to my own detriment, so I may end up cutting out important information. Let me back up a little bit.
People often come to me and ask me to patch a part of their site, or add something to it. This is becoming more routine and I don't really want to do my debugging and testing by pushing out to live sites anymore.
Note: none of these sites are hosted locally.
So, from within my LAN I'd like to set up a safe testing environment, separated from my current LAN, which, as best as possible, would simulate the real site: everything from the FQDN to whatever else I can mimic. The plan above seems like the way to accomplish this.
In my mind this is how it would work:
The subnet would be used to create a "laboratory environment," separated from my main network. The DNS server on the subnet would let me use the FQDN to access the testing site from within the subnet for testing/debugging (mimicking the real site as closely as possible).
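From what I've read so far, something like dnsmasq on that DNS box could do the override with one line per site, though I haven't tried it yet (the IP is a placeholder for the lab web server):

```
# /etc/dnsmasq.conf -- sketch; 192.168.50.10 is the lab web server
address=/thesite.org/192.168.50.10
# anything not listed here resolves normally via upstream DNS
```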
However, I've never done this before, so... well, I'm looking for advice from others: is this the correct way, or are there better ways?
Yeah, no worries, I'm trying to fully understand your situation too.
If you're having to fix multiple sites...for multiple people...often....
I think your plan sounds reasonable but my only concern is how you would get all of these sites (all of the HTML files, content, etc) over to your local network each time? FTP? SCP?
In any case - I've never done this, so hopefully someone who has will chime in to the conversation.
@Eden gave me a really great wget command which lets me download an entire site's directory tree (HTML, CSS, etc.). WordPress is a little tricky because it relies heavily on PHP, and that runs on the server, so all you see is the HTML it generates, not the PHP itself. (I may have to do a little more research on getting a complete WordPress source tree.) So my guess is that wget will only download the HTML generated by the PHP.
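For reference, a typical mirroring invocation looks something like this (not necessarily the exact command Eden gave me, and the domain is an example); as noted, it only captures the rendered output:

```shell
# Mirror the rendered site into the current directory.
# Only the generated HTML/CSS/JS comes down; the PHP stays on the server.
mirror_site() {
    wget --mirror --convert-links --adjust-extension \
         --page-requisites --no-parent "https://$1/"
}
# usage: mirror_site thesite.org
```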
To get a site's complete source code from its server to my local lab environment, I might use a file server running on this subnet which I can SSH into from my development machine, then wget the real site so it downloads into a directory on the file server. However, even after all the work of building the actual "laboratory environment," it seems like each time I switch between jobs (sites) there's going to be some configuration needed. Maybe I should look at nginx as well as Apache.
Getting the data from the FS to my development machine would probably be done via FTP or SCP.
Lastly, the FS might run three VMs. One would be a file store for the original wget downloads, i.e. the current working sites. The second might be a Git server where I can push my changes and keep a good version history. The last VM might store the production-ready site, which I'd then FTP to the production site's server. Finally, there'd be a standalone web server running Vagrant.
@Eden has probably made a good suggestion with wget. I've never used it for this particular purpose, so I'm not sure exactly how it would work.
Sounds like you're gonna get tired of fixing peoples' websites really quick!! Just kidding. Kind of. This stuff can get tedious - quick.
In for some other answers/input..
Dude, I'm going to have to get more computers ... : /
Personally, I would recommend setting up a few VMs (or Vagrant boxes) with common variations (PHP and/or MySQL versions, for example), then gaining CLI access to the server. Tar up the WP installation, mysqldump the DB, then install both on the applicable box.
For testing, do a find and replace on the SQL file (VI works brilliantly for this) to replace the client's domain with your local domain (e.g.: client.localhost.com), do your bit then reverse the process to put it all back again.
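A rough sketch of that capture step, assuming SSH access and with made-up host, path, and database names:

```shell
# Pull one client's WP files and database down to the local box.
# All three arguments are placeholders -- adjust per client.
fetch_wp_site() {
    local host=$1 wp_path=$2 db=$3
    # Stream a tarball of the WP directory straight down over SSH.
    ssh "$host" "tar czf - -C '$wp_path' ." > "${db}-files.tgz"
    # Dump the database the same way.
    ssh "$host" "mysqldump --single-transaction '$db'" > "${db}.sql"
}
# usage: fetch_wp_site user@client-host /var/www/html clientdb
```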
VIM command for SQL dump file (if needed):
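Something along these lines (domains are examples). One caveat: WordPress stores some URLs inside serialized PHP strings whose recorded lengths have to match, so a blind substitution can corrupt those values when the old and new domains differ in length; a serialization-aware tool like WP-CLI's `wp search-replace` sidesteps that.

```shell
# In Vim the substitution would be:  :%s/client\.com/client.localhost.com/g
# The same rewrite with sed, demonstrated here on a one-line sample dump:
printf "INSERT INTO wp_options VALUES ('siteurl','http://client.com');\n" > dump.sql
sed -i.bak 's/client\.com/client.localhost.com/g' dump.sql
```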
If I understand correctly, you're thinking that I could just utilize XAMPP on my development machine. My only concern is how it affects the URL file path. Does that mean that at the end I'd need to use vi to remove the "localhost" (i.e. localhost/thesite.org) from the source code?
In the meantime, I'll throw together a quick simulation site and run some tests, since I still need to look more into how to capture an entire WordPress site.
Finally, do you think a system like the one I suggested would be suited to developing web applications that use REST APIs?