Starting again

Date: Oct 12th 2019

I do occasionally get a bit annoyed about the cruft on my main work system. It is to be expected really, installing packages and apps that one needs as one goes along. I've just purged a bunch of stuff from it now. But I want to re-install really. Also if fortune smiles I may get a new laptop :D :/

Anyway, I'm a bit unsure about using a fully fledged Desktop Environment like GNOME. It works nicely and looks good, but I've been running hard into swap, which freezes the system totally, and well, tiles just do it for me.

The weird thing is how using i3 seems to kill all the other fancy stuff - like lockscreens and screen brightness - so over the years I've added a few scripts to the install: keyboard shortcuts, even lid-down actions, all of which are nullified (probably from relying on GDM to cover that stuff).

I think I'll be using basic Debian Buster and installing Xorg and i3.
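
Something like this should be all it takes once the base install is done (package names from memory, so worth double-checking):

sudo apt install xorg i3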

Backup

I need to make a backup of stuff in /home readily available to grab - mostly SSH keys and odd config bits. I can get by with minimal stuff really, it's just nice to have the comforts.

  • My SSH keys (well, one. Might have to regen them for my local devices)
  • Dotfiles (thank you github) -- git clone https://github.com/wuxmedia/dotfiles.git
  • Weird lid-down stuff
  • lockscreen with xautolock calling i3lock - something like the line below
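
A minimal sketch of the sort of line I keep in the i3 config for that (the 10-minute timeout and the colour are just examples):

exec --no-startup-id xautolock -time 10 -locker 'i3lock -c 000000'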

MongoDB: set up admin and all nice and secure

Date: 2019-05-03

I've not used MongoDB before, nor JavaScript in any meaningful or deep way, but I recently had to admin a fresh Mongo setup, with a secure admin and users, set up with SSL.

To set up secure auth on a system you need to make an admin user first, so you can then create the other users and databases (before you set up the auth, the default is unauthed).

MongoDB Enterprise > use admin
switched to db admin
MongoDB Enterprise > db.createUser({
  user: "admin",
  pwd: "admin",
  roles: ["userAdminAnyDatabase", "dbAdminAnyDatabase", "readWriteAnyDatabase", "userAdmin", "readWrite"]
})

Then when that is done, log out and edit /etc/mongod.conf:

net:
[...]
security:                
  authorization: enabled

I wasn't entirely sure if the conf requires the indentation, but it's YAML, so it does - authorization: enabled needs to stay indented under security:.

Restart the service: systemctl restart mongod

You'll need to log in with new options: mongo --username admin --password admin

Once logged in, creating users looks like this:

MongoDB Enterprise > use prod_database
switched to db prod_database
MongoDB Enterprise > db.createUser({
  user: "prod_user",
  pwd: "XXXX",
  roles: [{ role: "readWrite", db: "prod_database" }]
})
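
To sanity-check the new user I'd try logging in as them; note the --authenticationDatabase flag (my addition here), since this user lives in prod_database rather than admin:

mongo --username prod_user --password XXXX --authenticationDatabase prod_database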

PHP MASTER

Date: 27th Oct 2018

WebBlog...WeBlog...Blog... see what I did there?

Ahh PHP, long may it live; nearly 80% of sites use it, so why haven't I used it in a big way?

Laziness! Why bother writing it from scratch if someone already has made it for you?

Anyway, that's not the point of this post. This is my new 'blog', in case you hadn't noticed... it's a very classic weblog. I'm writing it on the server as a PHP file (written in HTML actually). Why? I wrote one a long time ago - well, I copied one from a video with the idea of learning PHP; I seem to learn by doing things I want to do, more so than things I have to do. Who doesn't? Anyway, the blog worked quite nicely, but there was no spam protection on the comments, so it got totally overrun and I had to ditch it. So no comments on this one!

Finding a better way

I tried embedding tumblr - which was great. I haven't really learnt markdown apart from working out how to write wiki articles, so I mostly wrote in the HTML editor, which suited me. The theme was simple to modify and I slapped it into my site as an iframe (how delightfully 00s) along with some scripts that tumblr needed. I wrote a few posts and was happy with how it went. I recently shared the page with a mate and he was all of a fluster as there were too many cookies and scripts to run (he's not really into that sort of thing). I tried it out in an incognito tab and guess what? No blog :(

Tumblr, since having to add a GDPR access thing, presents none of the site until you acquiesce to giving them all your data. Bummer. So bye bye tumblr.

Then I tried Blogger, the effort from Google, which is nice - I like Google stuff and it was immediate to get a site and subdomain. I don't think I've ever read that many blogs on it, but unless they severely rewrote the HTML on the thing, man, it does not look good embedded - far too many widgets and floaty things. I got it to look OK, and tried it out on my mate; he also found it too heavy with scripts and such.

Well screw that! All I want to do is:

  • Have a list of things I've ranted about so I don't do it twice (very possible)
  • Practice the hell out of writing html like it's markdown - why? IDK
  • also a list because maybe I forget things; also, apparently writing about stuff cements it into one's brain - I need that
  • Also practising the hell out of creating stuff with vim, and hopefully speeding my typing up a bit (not really working but that's the idea)
  • Maybe learn some PHP, wasn't that what this post was about? OK time to get it back on track then!
Well - I can surely do that in a non-complex way. I don't want bloody tag clouds or all that sort of thing; one big page of posts is fine, and I don't need comments because - see above, sorry - I don't give a damn.

OK, to the point: I have a directory where my posts reside as HTML files, which is also cool because I can just write in plain text and post it if I really want. I figured some PHP would be OK to pull the files in and 'include' them (amazingly useful already on my site for global nav and headers and footers), so that was good. But I did want the posts in 'normal', 'latest first' order, otherwise a lot of scrolling would be needed. I looked around trying to get PHP to extract the timestamp of each file, but that was too tricky; there must be something simpler. Ah: just prefix the filename with an ISO 8601 date and glob picks them up automatically in order (clever eh?). Hmm, that still gives the blog posts oldest first :/ Hmmm... PHP (and Stack Overflow of course) to the Rescue!

$list = glob("posts/*.php"); // filenames sort ascending, thanks to the ISO 8601 prefix
foreach (array_reverse($list) as $filename) { // flip to latest-first
    echo '<div class="post">';
    include $filename;
    echo '</div>';
}

There it is, the magic code - get all the PHP files in the order they come, put that thang into an array, then get all Missy Elliott an' flip it an' reverse it (showing my age). So yeah, it basically shows the list of files in reverse order, wraps each in a pretty post div and drops the file in. Yeah, this will probably get a bit slow as more files pile up, but I can take it.

EDIT: Of course I can't quote PHP code with the script tags or it runs the damn PHP as PHP, complete with divs, even taking all the files and including them inside itself ad nauseam :D
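
One way round it, sketched here with a hypothetical snippet file, is to read the code in as plain text and escape it rather than include it:

// read the file as text and escape it so the browser shows it instead of running it
echo '<pre>' . htmlspecialchars(file_get_contents('posts/example-snippet.php')) . '</pre>';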

Setting up a ‘modern’ SSL terminating caching server...

Date: Oct 26th, 2018

Back in the old days you had the Apache server. You set it up - defaulted to port 80 - and away you went, adding sites as you went, maybe playing with password-protected dirs or seeing what PHP could do.

Then SSL came around. OK, no worries: we add a port 443 bit to the config, fiddle around and there we are, probably with a redirect to force SSL.

Now recently I’ve been on slightly higher-level systems, if you can imagine that! These are set up to squeeze the most out of the machine. The idea is a caching layer (we use Varnish) to catch as many easy requests for static stuff as possible: images, scripts, CSS, that sort of thing. You want that caching layer on port 80 so that browsers can hit it. What about SSL? Well, Apache can proxy, so why not use that to proxy port 443 (this is the magic ‘SSL termination’ term) back to the caching layer on port 80, which sits there pushing requests back to Apache. Wonderful! Hmm, except that for every request on either port Apache likes to invoke PHP, which on a busy server can be costly, and a bit annoying given the reason all this is set up is to make the site super quick and more resilient to ‘hugs of death’. It breaks down like this:

Port 443 SSL/TLS request:

Nginx (port 443) --> Varnish (port 80) --> Apache (port 8080)

Port 80 HTTP request:

Varnish (port 80) --> Apache (port 8080)

OR the preferred (someone actually took time to teach me) way!

Port 80 or 443 request for site:

Nginx (port 443/80) --> Varnish (port 6081) --> Apache (port 8080)

Which is a bit better as it catches 80 right off the bat. nginx has that going for it at least: it can have the listen for port 80 AND 443 in the same 'server' block, unlike Apache and its insistence on entirely separate virtualhosts.
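
A minimal sketch of that one-block listen (example.com and the cert paths are placeholders):

server {
    listen 80;
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}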

EASY! RIGHT? Well... getting it set up from scratch is probably a lot simpler. In Debian (so far), with Apache already running happily, installing nginx fails a bit - the way out of that is to just stop Apache, so their ports don't conflict.

Best to get Varnish running first. The trick with newer Debian is that the Varnish ports are set up in this file: /etc/systemd/system/multi-user.target.wants/varnish.service, OR it could be in the classic /etc/default/varnish - I have no idea why it can be either. Then make sure you reload everything to get it running. Basically you have to make sure all the ports are set and pointing to the right place. You also need to get rid of Apache's 443 ports in /etc/apache2/ports.conf and change 80 to 8080 (or whatever really, as long as it's available).
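
For reference, the port lives on the varnishd command line inside that service file; the stock Debian unit is roughly this (from memory, so treat it as a sketch):

ExecStart=/usr/sbin/varnishd -a :6081 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m

And after editing a unit file, systemd wants a reload before the restart:

systemctl daemon-reload
systemctl restart varnish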

We need to point nginx at the backend, passing all this to Varnish to cache and hand on to Apache:

# Proxy to Varnish locally
location / {
    proxy_pass http://localhost:6081;
}
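
Varnish itself also needs to know where Apache now lives; in /etc/varnish/default.vcl the backend is roughly:

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}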

For nginx the ports are set up in the 'sites' configs, so that isn't too bad. SSL goes in here, on port 443 of course, and it all needs passing to the Varnish port. Having been recently schooled: one can add 80 AND 443 in the same block, very handy.

For HTTPS to be passed through properly, nginx needs these lines (with the corresponding handling in the Apache conf), so everyone is on the SSL page:

proxy_set_header        X-Forwarded-Proto 'https';
proxy_set_header        X-HTTPS         'on';
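
On the Apache side, the corresponding bit can be done with mod_setenvif - this is a sketch of the usual idiom rather than the exact line from my conf:

SetEnvIf X-Forwarded-Proto "^https$" HTTPS=on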

At that point I generally use a Let's Encrypt cert to finish the whole thing off. Note to myself: these are the lines for nginx and, more importantly, the files which are made by certbot:

ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot

Nginx Query Redirects and exceptions

Date: Oct 25th, 2018

So I had to add a bunch of nginx query-string redirects. I’ve never done any in nginx before, though I have experience doing them in Apache - how hard can it be?

This was the best site I found which explained the mechanism of it - so I made the main query redirects there. But there were exceptions; I just add another location block, yeah?

Narrator: It wasn't going to be that easy

I added the block and it didn't work. I didn't trouble to look at the log or why it broke (it failed the configtest); I figured the new location block was clashing with the other one, which seems to make some sort of sense.

Had to look for another way then. The approach I took might be wrong - there didn't seem to be a post about exactly what I wanted to do - but I persevered, and found a gist from 6 years ago which seemed to be what I wanted. Here they use a map, which I've seen before but didn't think applied; well, shocker, it does. Important to note that the map has to go outside the server block, which took me a while to realise before carefully looking at the gist again - I still don't get it TBH. The rough principle was to get the location blocks to do the heavy lifting of regexing the main redirects, like this:

location /path/bloody.asp {
    if ($query_string ~ "^CategoryID=[0-9]*$") {
        rewrite ^.*$ https://www.newdomain.com/new-url? break;
    }
}

Great, so those are happy; it all works. There is another block I put in, based on the above, to catch some more in a different location - which leaves the edge cases. The map goes up top (outside the server block, remember) and looks like this:

map $query_string $new_url {
    ~id=123\?special_case https://newdomain.com/new-url;
    ~id=321;
}

Don't forget the damn ';'s at the end of every damn statement, and to escape any regex chars like '?.|' and so on. I think the special sauce is actually in the server block, here:

# 301 Moved Permanently
if ($new_url) {
    return 301 $new_url;
}

Which, of course, I took from the gist. I'm assuming this is what matches things up with the map: the map sets $new_url for any query string matching one of its patterns (and leaves it empty otherwise), so the if only fires when there's a redirect to return.

SSH Reverse proxy tip

Date: Jul 7th, 2018

I make an SSH proxy connection to a server, which then allows me to use a browser extension that proxies web traffic - so I can get into an internal network which is otherwise locked down to any other traffic by firewalls. A colleague showed me how to do that years ago, and for all those years I’ve run the command, which logged me onto the server but left a shell that ends up redundant, as I have a separate tmux session for any work.

Here is the basic command if you want to do that:

ssh -D 9999 you@yourserver

So that's nice: you have a shell on your remote server, plus a SOCKS proxy on local port 9999 that you configure in your web browser.

I didn't really like that, as I have a few special commands and aliases in my local shell, so I want my local shell back and not a window I just end up hiding somewhere. Finally.... I can reveal to myself how to fork SSH into the background, so it releases my shell while still doing all the stuff it did before:

ssh -fND 9999 you@yourserver

I jammed them all together so it might be hard to decipher: the -D opens the local port as a dynamic (SOCKS) proxy to piggyback traffic over the connection, the -f is to fork into the background - so it doesn't take the local shell over - and -N is to not run any remote command, which equates to not getting a shell up. Awesome.

Due to a popular tweet of mine:

Turns out this might be of use for some people. It can also be used inside a normal SSH line and, as someone pointed out in the thread, can be added to .ssh/config, which is nice.
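
For the .ssh/config route, a sketch (the workproxy alias is my own invention; host and user are placeholders):

Host workproxy
    HostName yourserver
    User you
    DynamicForward 9999

Then ssh -fN workproxy does the same as the long line above.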