In my previous post, How to deploy website to openshift with nginx, I briefly introduced how to use the Nginx Builder Image to deploy a website on OpenShift.
Reverse proxy configuration, however, is not part of the web content, so we cannot use the Builder Image directly to set it up. Instead, we need to create a real Nginx image that can be further configured with a reverse proxy table.
We will create the Nginx base image on top of the Red Hat certified Nginx Builder Image (rhscl/nginx-112-rhel7).
First, let’s create an app that has an empty index.html page from a local directory (e.g. $HOME/webfolder):

```
oc new-app registry.access.redhat.com/rhscl/nginx-112-rhel7~$HOME/webfolder --name=nginxbase
```
This will create the corresponding resources (buildconfig, deploymentconfig, imagestream, service, etc.) for nginxbase in OpenShift.
Once the resources are created successfully, let’s build the image:

```
oc start-build nginxbase --from-dir=$HOME/webfolder
```
This will create the Nginx base image and push it to the internal registry in OpenShift.
Once the nginxbase image has been created, we can remove all the other resources:

```
oc delete bc/nginxbase
```
We will need to create a custom image for the reverse proxy. First, create an empty folder and add the following files.
In the Dockerfile, add the following content:

```
FROM nginxbase:latest
# copy our reverse proxy config into nginx's default include folder
ADD nginx-proxy.conf /opt/app-root/etc/nginx.default.d/
```
When the nginx server starts, it automatically loads any *.conf configuration files in the /opt/app-root/etc/nginx.default.d folder into its default server definition block, so we place our nginx-proxy configuration file in that folder.
In the nginx-proxy.conf file, we can add the reverse proxy configuration. For example:

```
location /userProfile {
    # proxy to the backend service; this upstream URL is only an example
    proxy_pass http://userprofile:8080/;
}
```
Next, create the reverse proxy app in OpenShift:

```
oc new-app --strategy=docker nginxbase~<path to dockerfile> --name=myReverseProxy
```
Just be aware that we have to use the docker strategy as the app’s build strategy so that OpenShift knows we will upload a Docker project.
Start the build:

```
oc start-build myReverseProxy --from-dir=<path to dockerfile>
```
Once the build finishes, OpenShift should automatically deploy a new pod. We can expose the reverse proxy by running

```
oc expose svc/myReverseProxy
```

and access the nginx reverse proxy through the route created.
Go to the Minishift GitHub Release Page, where you can see the Minishift binary assets for different operating systems. At the time of writing, the latest version of Minishift for macOS is minishift-1.17.0-darwin-amd64.tgz.
Once downloaded, expand the tar file and copy the minishift binary to the /usr/local/bin folder:

```
# untar the downloaded file and copy the binary into your PATH
tar -xvzf minishift-1.17.0-darwin-amd64.tgz
cp minishift-1.17.0-darwin-amd64/minishift /usr/local/bin/
```
Before Minishift can run, we need to set up the virtualization environment that allows OpenShift to run under macOS. For this step, I assume you have brew installed and configured correctly on your computer. You can simply copy the commands below and run them in a terminal:

```
# Update brew first
brew update
# install the xhyve driver that Minishift uses on macOS
# (one possible setup; see the Minishift docs for alternatives)
brew install docker-machine-driver-xhyve
sudo chown root:wheel $(brew --prefix)/opt/docker-machine-driver-xhyve/bin/docker-machine-driver-xhyve
sudo chmod u+s $(brew --prefix)/opt/docker-machine-driver-xhyve/bin/docker-machine-driver-xhyve
```
Once the virtualization environment is set up successfully, let’s start Minishift:

```
minishift start
```
The first time it runs, it will download all related dependencies, including images and the oc CLI binary.
After it starts successfully, you can see the IP address allocated to the master node of the OpenShift cluster; in my case it is 192.168.64.2. You should then be able to open https://192.168.64.2:8443 in your browser and log in with admin as both the username and password.
To shut down your Minishift cluster, simply run:

```
minishift stop
```
First, download the Node.js installer for your target operating system from here. The downloaded installer should be executable on the target operating system (e.g. .msi on Windows and .pkg on macOS). Follow the wizard to install Node.js on the target machine.
Once the installation finishes, open a bash or cmd session. The installation should have exposed both the node and npm commands:

```
$ node --version
$ npm --version
```
It is essential that both commands run correctly.
The official nginx docker image does not work on OpenShift, because OpenShift runs containers as a random non-root user while the official image expects to run as root. In this post, I will go through how to run the official Red Hat nginx image on OpenShift and deploy a website onto it.
So, with your OpenShift CLI tool (oc) ready, let’s get started.
With the oc tool, create a new app:

```
# oc new-app [BuilderImage]~[Source Code Repo]
oc new-app registry.access.redhat.com/rhscl/nginx-112-rhel7~<your git repo url> --name=myapp
```
OpenShift will pull the image from the registry and register it locally as a builder image, which allows building images together with the website’s source code. It creates a build config. Then it pulls the source code and assembles it with the builder image to produce another image stream. This command also creates a deployment config and a service.
If you have no accessible git repo, you can build against local source. Slightly differently, create a new app from the current folder:

```
# oc new-app [BuilderImage]~[Local Folder]
oc new-app registry.access.redhat.com/rhscl/nginx-112-rhel7~. --name=myapp
```
This will not actually upload the source code from the current folder to OpenShift; it just creates a build config. Thus we need to start the build with an extra parameter:

```
oc start-build myapp --from-dir=./
```
With the --from-dir param, oc will upload the content of the current directory and the builder image will assemble the code.
Once the app is created and built, we can expose it through the router:

```
oc expose svc/myapp
```

Once it is exposed, your nginx server and your website should be accessible through the associated route.
Using XMLHttpRequest (ajax) to transport data between client and server has been popular for a while. Sometimes we want the browser to retrieve binary data from the server (as an ArrayBuffer or Blob), such as pdf, image, and psd files. This post goes through how to achieve that with XMLHttpRequest and jQuery. For XMLHttpRequest, simply set the responseType of the XHR instance to either arraybuffer or blob. Example:
```
var xhr = new XMLHttpRequest();   // note the capitalization: XMLHttpRequest
xhr.responseType = "arraybuffer"; // or "blob"
```
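The snippet above is truncated; a fuller sketch looks like the following. The URL and callback names here are illustrative assumptions, not from the original post.

```javascript
// Fetch a binary resource as an ArrayBuffer. The endpoint "/files/report.pdf"
// is an assumed example; replace it with your own URL.
function loadBinary(url, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url);
  xhr.responseType = "arraybuffer"; // ask the browser for raw bytes, not text
  xhr.onload = function () {
    if (xhr.status === 200) {
      callback(null, xhr.response); // xhr.response is an ArrayBuffer here
    } else {
      callback(new Error("HTTP " + xhr.status));
    }
  };
  xhr.send();
}

// Usage (in the browser):
// loadBinary("/files/report.pdf", function (err, buffer) {
//   if (!err) console.log("got", buffer.byteLength, "bytes");
// });
```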
$.ajax does not support either arraybuffer or blob as its dataType, so we need to write a beforeSend handler:
```
//setup ajax
```
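The original snippet is truncated. In recent jQuery versions the native XHR object is not handed to `beforeSend` directly, so the simplest route — shown here as an alternative sketch with an assumed URL — is `xhrFields`, whose properties jQuery copies onto the underlying native `XMLHttpRequest`:

```javascript
// Assumed endpoint "/files/report.pdf"; adjust to your own server.
var ajaxOptions = {
  url: "/files/report.pdf",
  method: "GET",
  // jQuery applies these fields to the underlying native XMLHttpRequest,
  // so the response is delivered as a Blob instead of a string.
  xhrFields: { responseType: "blob" },
  success: function (blob) {
    // e.g. display it: img.src = URL.createObjectURL(blob)
    console.log("received a blob of", blob.size, "bytes");
  }
};
// In the browser: $.ajax(ajaxOptions);
```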
For more information about responseType, take a look at this. More browser support details can be found here.
To make your online-only web app (like https://studio.psdetch.com) work offline, your web app needs to be served over https, which service workers require for security. The steps are very simple and work for any web app:
Install sw-toolbox.

Create a service-worker.js at the root folder of your web app. Here, the location (root folder) is important.

Register the service worker in index.html:

```
// a minimal registration snippet
if ("serviceWorker" in navigator) {
  navigator.serviceWorker.register("/service-worker.js");
}
```

Write service-worker.js. You can follow the tutorial here, but in most situations you can just use the following script:

```
importScripts("/path/to/sw-toolbox/sw-toolbox.js");
toolbox.router.default = toolbox.fastest; // serve from cache, refresh in background
```
You can check what I have used for psdetch.
And that’s it: your whole web app can now work offline. With the fastest strategy, the response is always returned from cache and updated in the background when possible, which dramatically improves the user experience.
If you need https for local testing, you can generate a self-signed certificate:

```
openssl req -nodes -new -x509 -keyout key.pem -out cert.pem -days 365
```
To set up a brand new Raspberry Pi Zero W you need to prepare the following tools:

All connectors can come from the seller when you purchase the Raspberry Pi Zero W.

You can download the official Raspbian OS from here. If you want to use another operating system, the setup steps are the same.

Be aware: writing an image to the micro SD card will wipe all data on it.
Connect the micro SD card to your Mac through a micro SD card reader. Once you can see your SD card in finder, you need to find out its disk identifier in the system. Open a terminal and run diskutil list:
Find your SD card’s disk by its name. On my machine, /dev/disk2 (in the red rectangle) is the disk in the system; copy only disk2 (not /dev/disk2) into the clipboard.
Unmount the SD card in finder but do not unplug it.
Now we need to flash the downloaded image to the SD card. You may need to unzip the downloaded file first by double-clicking it in finder. Below is the command to use:

```
# note the lowercase "4m": macOS dd does not accept an uppercase M
sudo dd bs=4m if=<path to downloaded .img file> of=/dev/r<disk>
```

Replace the <path to downloaded .img file> and <disk> with your own values.
Now insert the flashed micro SD card into the Raspberry Pi Zero W, connect everything, and you are ready to go.

[Picture]

Use dd to write the disk; see above.

This post will go through how to change the folder where macOS stores screenshots.
Create a new folder anywhere using finder or terminal. Here I created a blog_statics folder in my Google Drive folder. Click the folder created in the last step and press Command+C to copy its full path to the clipboard.
Press Command+Space, type terminal, and open the terminal. In the terminal, type the following command:

```
defaults write com.apple.screencapture location <Folder Location>
# restart the screenshot service so the new location takes effect
killall SystemUIServer
```
Note: press Command+V to paste the folder path from the clipboard. It should give the following result:

Now all new screenshots should be stored in that location.
```
docker run --privileged -i -t -d --restart=unless-stopped \
  -p 2376:2376 \
  -p 10000-11000:10000-11000 \
  -p 10000-11000:10000-11000/udp \
  -v /mnt/opt:/opt:rw \
  -v /etc/docker:/etc/docker:ro \
  -v /mnt/var/lib/docker:/var/lib/docker \
  --name=docker docker:dind \
  -H tcp://0.0.0.0:2376 \
  --storage-driver=aufs \
  --tlsverify \
  --tlscacert /etc/docker/ca.pem \
  --tlscert /etc/docker/server.pem \
  --tlskey /etc/docker/server-key.pem
```
Install netcat and send some UDP data to a server:

```
apt-get install netcat
echo "content goes to server" | nc -u <ip> <port>
```
```
.select-type-1 {
```

Self-explanatory.
To run an overlay network on multiple hosts over swarm, the following engine option is required on each node, pointing the engines at a shared key-value store:

```
--engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500"
```
All swarm agents should have these options; otherwise you will likely get this error:

```
Error response from daemon: 500 Internal Server Error: failed to parse pool request for address space "GlobalDefault" pool "" subpool "": cannot find address space GlobalDefault (most likely the backing datastore is not configured)
```
If you are using docker-compose, there is nothing more to do: docker-compose will automatically create a default network as long as the options above are in place. Once the docker-compose file is finished, just run docker-compose up -d, which will create the network accordingly.
Otherwise, simply use the following command on your swarm:

```
docker network create --driver overlay --subnet=10.0.9.0/24 my-net
```
There is a limitation with docker-compose build: it cannot find the target node on which to build the image. Currently, the only way is to build on the node itself and tag the image, rather than building on the swarm:

```
docker build -t <tag name> path/to/dockerfile
```
```
var ip = (req.headers['x-forwarded-for'] ||
          req.connection.remoteAddress || '').split(',')[0].trim();
```
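The one-liner above can be wrapped into a small helper. This is a sketch: the header handling follows the standard `x-forwarded-for` convention, and the fallback property names are the classic Express/Node ones.

```javascript
// Return the client IP for an Express-style request object.
// x-forwarded-for may contain a comma-separated chain of proxies;
// the left-most entry is the original client.
function clientIp(req) {
  var forwarded = req.headers["x-forwarded-for"];
  if (forwarded) {
    return forwarded.split(",")[0].trim();
  }
  return (req.connection && req.connection.remoteAddress) ||
         (req.socket && req.socket.remoteAddress) ||
         null;
}

// Example with a fake request object:
// clientIp({ headers: { "x-forwarded-for": "203.0.113.5, 10.0.0.1" } })
//   → "203.0.113.5"
```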
Unit testing in AngularJS uses (by default) Jasmine and Karma. Also, angular-mocks needs to be installed; it is required for injection and some other mock objects:

```
bower install --save angular-mocks
```
The following npm packages are needed; add them as devDependencies:

```
"jasmine-core": "^2.3.4",
```
Configuration for karma:

```
module.exports = function(config){
```

Put this file at the root of the project.
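The configuration block above is truncated. A minimal `karma.conf.js` matching this setup might look like the following; the file paths are assumptions about the project layout, not taken from the original post.

```javascript
module.exports = function (config) {
  config.set({
    basePath: "",
    frameworks: ["jasmine"],
    files: [
      // load Angular and angular-mocks before the app and its tests
      "bower_components/angular/angular.js",
      "bower_components/angular-mocks/angular-mocks.js",
      "www/app/**/*.js"        // app code and test_*.js files together
    ],
    browsers: ["PhantomJS"],
    autoWatch: true,           // re-run tests whenever a file changes
    singleRun: false
  });
};
```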
Install the karma CLI globally and start karma:

```
npm i -g karma-cli
karma start karma.conf.js
```

This will watch all files and re-run the tests whenever a file changes. It is strongly recommended to keep this running while developing.
Once the above setup is done, you are ready to write unit tests.
Add test_user.js to the www/app/user/ folder (or similar); just keep the test_ file name prefix.

```
describe("user module",function(){
```
Above, it first injects the user module and then runs simple tests.
If a component depends on other providers, you can use jasmine.createSpy to create a dummy function:

```
describe("account module",function(){
```
Example:

```
describe("downStreamStore",function(){
```
The test above is synchronous even though it uses promises; therefore there is no need to use Jasmine’s async support with done.
```
var schema = new Schema({
    // "hash" is a hashing function defined elsewhere; registering it as a
    // setter makes Mongoose run it whenever the field is assigned
    password: { type: String, set: hash }
});
```
The hash will be called mainly in the following scenarios.

When a new doc is created:

```
model.create({password:"12345"}) // password will be hashed
```
When a value is set on a doc:

```
doc.password = "22222" // 22222 will be hashed
```
However, this will not work for an update query:

```
model.update({_id:<id>},{$set:{password:"12345"}}) // password will not be hashed
```
For the password, you can write a beforeUpdateHook:

```
schema.methods.beforeUpdateHook=function(data){
```
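The hook above is truncated. A minimal sketch of the idea, with a placeholder `hash` function standing in for the real one (e.g. bcrypt), could look like this:

```javascript
// Placeholder hash for illustration only; use a real algorithm (e.g. bcrypt).
function hash(plain) {
  return "hashed:" + plain;
}

// Hash the password inside an update payload before it is sent to MongoDB,
// mirroring what the schema setter does for create/assignment.
// Call it as: model.update(query, beforeUpdateHook(data))
function beforeUpdateHook(data) {
  if (data.$set && typeof data.$set.password === "string") {
    data.$set.password = hash(data.$set.password);
  }
  return data;
}
```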