Getting IP location information with Angular 7

Using Angular Maps Components and a new service called ipapi, you can quickly put together something that gets IP information from a client and puts it on a map.

Angular Maps Components is really great, and the setup with ipapi is a no-brainer (they have a free tier for up to 30,000 requests). It literally took me more time to wait for the Angular project scaffolding than to implement the whole thing!
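
As a sketch of what the lookup gives you, here is how an ipapi-style JSON payload can be reduced to the coordinates a map marker needs. This is a minimal sketch in Python: the field names follow ipapi's documented response shape, and the sample payload is invented for illustration, not a real lookup result.

```python
import json

# Illustrative ipapi-style payload (field names assumed from ipapi's docs;
# the values are made up, not a real lookup result).
sample_response = """
{"ip": "8.8.8.8", "city": "Mountain View", "country": "US",
 "latitude": 37.386, "longitude": -122.0838}
"""

def to_map_marker(payload: str) -> dict:
    """Extract the fields a map marker needs from an ipapi JSON payload."""
    data = json.loads(payload)
    return {
        "lat": data["latitude"],
        "lng": data["longitude"],
        "label": f'{data["city"]} ({data["ip"]})',
    }

marker = to_map_marker(sample_response)
print(marker)
```

In the Angular app, the same extraction happens in the service that calls ipapi before handing the coordinates to the map component.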

The code is on GitHub:

How to connect to SAP HANA using JDBC

Recently I had to connect a Java application to SAP HANA and I made some notes along the way:

The first step is to get the SAP HANA JDBC driver, a file called ngdbc.jar. The quickest way is to download the SAP HANA Cloud Platform SDK from here:

Choose the latest “Java Web Tomcat 8” from the download section (a package starting with neo-).

Unzip the archive to any location on your machine.

Extract the JDBC driver (ngdbc.jar) from the archive. You will find it inside a hidden folder, under repository/.archive/lib/ngdbc.jar

Use the driver with the connection string


Where the port is


So if your instance number is 10, the port would be 31015.

The custom driver class name is
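
To tie the pieces together, here is a minimal Python sketch of the connection parameters: the port is built as 3<instance number>15 (matching the example above), the URL uses the jdbc:sap:// scheme, and the driver class name is the standard one shipped inside ngdbc.jar. The host name and instance number are placeholders.

```python
def hana_port(instance_number: int) -> int:
    """SQL port for a SAP HANA instance: 3<instance number>15."""
    return int(f"3{instance_number:02d}15")

def hana_jdbc_url(host: str, instance_number: int) -> str:
    """Build the JDBC connection string for the ngdbc.jar driver."""
    return f"jdbc:sap://{host}:{hana_port(instance_number)}"

# Example from the text: instance number 10 -> port 31015.
print(hana_port(10))
print(hana_jdbc_url("myserver", 10))

# Standard driver class inside ngdbc.jar, to register with DriverManager:
DRIVER_CLASS = "com.sap.db.jdbc.Driver"
```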



How to build an API for SAP HANA using strongloop/loopback

One of the aspects I like the most about SAP HANA is the cloud app development environment that allows you to quickly put together a data-entry app using Fiori.

Recently, I wanted to build a small JavaScript app for data querying and entry using the awesome ag-grid. The data was in SAP HANA but the prospect of building and testing a secure API was quite daunting (is it worth it? How long is it going to take? Who’s going to maintain it?) It was actually easier to switch to MongoDB, use Express or Parse and add an ETL process to sync the databases. Wouldn’t it be great if there was a way to create some sort of automatic API through configuration?

There is.

The LoopBack component of StrongLoop offers the possibility to quickly create secure APIs for CRUD operations against MySQL, Postgres, Oracle and other databases. In many cases, it allows you to completely bypass the development of boring, commoditized backend stuff. Using a convention-over-configuration approach, you can create endpoints for each of your tables in a matter of minutes.

But, can it connect to SAP HANA?

I googled and found a connector for HANA:

The best way to set this up is to containerize the solution: create a Docker container with an installation of StrongLoop, and link a directory on the host machine to the working directory of the container. That way the configuration lives outside the container, where you can modify it quickly, and you can easily switch or upgrade the container itself.

I started from the official Node image (you could also start from the official StrongLoop image) and created my own, which you can use. Here is a link to my image.

FROM node

MAINTAINER Daniel Pradilla <>

RUN npm -g config set user root

RUN npm install -g --unsafe-perm strongloop 

RUN npm install loopback-datasource-juggler

RUN npm install loopback-connector-saphana


Once you start the container, you need to create a datasource pointing to SAP HANA:

docker run --name loopback -p 3000:3000 -v `pwd`:/app/ -t -i danielpradilla/loopback slc loopback:datasource

Then, you have to edit your server/datasources.json file and manually specify the schema name. (This is something you don't need to do with other databases, and you may get stuck on a "table not found" error if you skip it.)

{
  "db": {
    "name": "db",
    "connector": "memory"
  },
  "hana": {
    "host": "my_server_address",
    "port": my_server_port,
    "database": "MY_DATABASE_NAME",
    "name": "hana",
    "user": "my_HANA_user",
    "password": "my_hana_password",
    "schema": "MY_SCHEMA_NAME",
    "connector": "saphana"
  }
}

Then you create an endpoint for the table using the wizard:

docker run --name loopback -p 3000:3000 -v `pwd`:/app/ -t -i danielpradilla/loopback slc loopback:model

Or using Arc, by running the graphical interface:

docker run --name loopback -p 3000:3000 -v `pwd`:/app/ -t -i danielpradilla/loopback slc arc

And after that, you are ready to experience the awesomeness of having all the API endpoints created for you.

docker run --name loopback -p 3000:3000 -v `pwd`:/app/ -t -i danielpradilla/loopback slc run .

The next step would be to secure the API using a microgateway for API key validation, OAuth 2.0, and rate limiting.

Linear optimization with or-tools: containerizing a gunicorn web application

Previously, we left our app working with our local python+gunicorn+nginx installation. In order to get there, we had to do quite a bit of configuration, and if we wanted to deploy this on a server or send it to a friend, we would have to go through a very error-prone process, subject to version changes and missing libraries. A potential nightmare if we contemplate switching from one operating system to another. Is there a way in which we could combine our code and configuration in a single, easy-to-deploy, multi-platform package?

Get the code here

One solution for this is to create a single Docker container that, when run, will create the environment and deploy our code in a controlled environment.

On Docker Hub you will find thousands of preconfigured containers. The best way to start is to find the closest one that suits your needs and customize it. That way you avoid laying the groundwork and can focus on the specifics of your application.

I tend to trust the containers built by larger vendors, organizations or open-source projects, because I find that they usually keep their containers up to date and, most importantly, they are heavily battle-tested in dev and production.

In this case, I chose a gunicorn container created by the Texas Tribune. To start, you download and install Docker, and then download your chosen container to your machine.

The way to customize a Docker container is to edit the Dockerfile. There you will specify commands to install, copy or run files specific to your project. In our case, I added an installation of python-dev, falcon and the Google or-tools:

# install what's necessary for or-tools
RUN pip install --upgrade pip
RUN pip install --upgrade wheel setuptools virtualenv
RUN apt-get -y install flex bison
RUN apt-get -y --fix-missing install autoconf libtool zlib1g-dev texinfo help2man gawk g++ curl texlive cmake subversion

#install gunicorn and falcon for providing API endpoints
RUN pip install gunicorn==19.6
RUN pip install falcon

#install or-tools
ADD or-tools_python_examples_v5.0.3919.tar.gz /app
RUN mv /app/ortools* /app/ortools && cd /app/ortools/ && python setup.py install --user


Then I created separate configuration files for gunicorn and nginx, and a couple of supervisor configurations. Supervisor will restart the services in case one of them goes down, which might happen if I introduce an unrecoverable error in the python script:

#copy configuration files
ADD /app/
ADD gunicorn.supervisor.conf /etc/supervisor/conf.d/
ADD nginx.conf /app/
ADD nginx.supervisor.conf /etc/supervisor/conf.d/
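
For reference, a gunicorn.supervisor.conf could look roughly like this. This is a sketch, not the file from the repo: the program name, the app module (app:application) and the paths are assumptions for illustration.

```ini
[program:gunicorn]
; Restart gunicorn if it dies, e.g. after an unrecoverable Python error
command=gunicorn --config /app/gunicorn.conf.py app:application
directory=/app/www
autostart=true
autorestart=true
stdout_logfile=/app/logs/gunicorn.log
redirect_stderr=true
```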

After the initial configuration, we build using the docker build command:

docker build --no-cache -t danielpradilla/or-tools .

And then, we run the container as a daemon:

docker run -p 5000:80 -d --name or-tools-gunicorn danielpradilla/or-tools

The web server port is specified as a parameter. This maps port 5000 in localhost to port 80 in the container.

Now, time to install our code. You can copy your code to the Docker container, but what I prefer is to have my code in a local folder on my machine, outside of the Docker container. That way, I don't need to copy the code to the container every time I change it, and I keep a single unmistakable copy of the code.

To do this, you mount the local folder as an extra volume inside the container. Change the Dockerfile and add:

VOLUME ["/app/logs","/app/www"]

And then, when you run the container, you specify the location of your local folder:

docker run -v :/app/www -p 5000:80 -d --name or-tools-gunicorn danielpradilla/or-tools

This will allow you to experiment with multiple versions of the code (production and development) with a simple parameter change. You can run two docker containers pointing to different folders and opening different ports, and then compare the results side by side!


Get the code here

10 things I learned while setting up five masternodes

Photo by Denys Nevozhai on Unsplash

Over the past few weeks, I’ve been experimenting with masternodes as alternatives/replacements to traditional crypto mining rigs. Like with many other crypto-related things, I was surprised to find such a huge community and wealth of options. It’s akin to opening a window into another world.

What interests me the most is to learn to what extent Proof of Stake has the potential to replace Proof of Work, and the best way to learn —apart from formal reading— is to set up your own.

Masternodes let you act as an enabler of a decentralized value-exchange network by locking, or "staking", a fixed amount of coins in exchange for the privilege of transmitting or verifying transactions. Basically, you buy a fixed amount of coin, say 1000, and lock them in a masternode.

I picked 5 projects at different price points: ALQO (XLQ), Ellerium Project (ELP), Rampant (RCO), High Temperature Coin (HTRC), and Madcoin (MDC). Just by the names, it sounded like a bad idea, but I cannot, and probably never will, afford to dump $400K into a Dash masternode. Also, these… um, "coins" offered the promise of a high risk/reward investment and the always underestimated chance of learning something by making a fool of myself.

I had low expectations: I wanted some education, and the possibility for the experiment to pay for itself with the rewards from HODLing the coins.

So, what did I learn?

1 You can set up a masternode anywhere, but it’s best if you get a VPS

The masternode can be any machine connected to the internet, but you need a fixed IP address. Exposing your home network to attackers is a bad idea, so the standard procedure is to get a VPS from a hosting provider and set up the masternode there.
I got a VPS from OVH, just because they had an offer for a year-long plan of 2GB/10GB at €2.5/month.


2 It’s a scammers free-for-all

In an industry already filled with pyramid schemes, masternodes offer scammers an almost-frictionless way of stealing our money. See this article for a lengthy description of the different scamming methods.


3 It’s all —almost— the same code base

This one was quite surprising. All the clients I tested come from the same origin. I believe it's either the Bitcoin or the Dash client (I haven't checked); they all have the same names for their command-line tools and the same options.

However, I noticed some code smells: the clients for High Temperature Coin and Madcoin consist of a single application to run the daemon and query the status of the masternode, whereas ALQO uses the more sensible split of alqod as a daemon and alqo-cli for client-side queries.

I guess this makes it even easier to swindle a couple of hundred people.


4 Cheaper coins are harder to set up

You want the easiest setup procedure? There's a markup for that. The best developers/marketers flock to the most popular projects. They are better at debugging, keener on following up on errors, and more likely to write good documentation.

ALQO, the "premium" coin in this case, has a flawless setup procedure. They also offer a monitored VPS themselves for $9.99 with minimal setup effort, a clever move by the team, given the hefty markup they charge. On the other hand, it's worth it if you don't want to invest a few hours tinkering with settings and another chunk of your time monitoring whether the masternode is still up.

My lifesaver was Nodemaster, an excellent tool that allows you to install around 60 different masternodes by just running a script.


5 You can set up more than one masternode per server

It kinda defeats the purpose of a supposedly decentralized network, but some masternode coins allow you to start more than one daemon per machine, if you configure the ports correctly and you have extra IP addresses. As long as you don't use the same IP and ports, you can start as many daemons as your memory allows. Each daemon consumes around 250 to 400MB.
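
As a back-of-the-envelope capacity check, here is that arithmetic using the 250 to 400MB per daemon figure above. The OS memory reserve is my own assumption, not a measured number.

```python
def max_daemons(vps_memory_mb: int, per_daemon_mb: int = 400,
                os_reserve_mb: int = 256) -> int:
    """Worst-case daemon count for a VPS, leaving some room for the OS."""
    return max(0, (vps_memory_mb - os_reserve_mb) // per_daemon_mb)

# A 2GB VPS (like the OVH plan mentioned earlier), worst case:
print(max_daemons(2048))   # 4

# Best case, at 250MB per daemon:
print(max_daemons(2048, per_daemon_mb=250))   # 7
```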

This is a cheap way to hedge your bets: get onto several cheap-ish coins, find a high-memory VPS, and load it up as much as you can.


6 I found a use case for Discord

No amount of customization will make me choose Slack over the traceability of a 15-year-old email inbox. But I found that almost all of these coins use Discord for their community engagement and support and, turns out, it works extremely well. I was able to get responses to my queries within minutes without the noise that Twitter brings. It works just like IRC did 20 years ago 😉


7 Decentralized exchanges offer the future now

Most of these coins need to be bought at decentralized exchanges. Learning how these exchanges work was worth all the trouble. They are one of the best representations of how we can become fully independent of banks and clearinghouses… or maybe we’ll never get there, but decentralized exchanges sure are extremely efficient and automated intermediaries.


8 It works!

I was amazed when I received my first reward. Mere cents, but satisfying nevertheless, because it is, essentially, free money (after costs).


If you want to get into this, I have two recommendations… and this is NOT investment advice:

9 Check your expectations about how long-term this can be

If you are doing it for learning purposes, don't overthink it. But if you're planning medium term (months) or more, you need an exit strategy. Setting up a masternode might take you anywhere from 30 minutes to 8 hours, depending on the transaction time, network speed, who your VPS provider is, and how good the coin developers are. Make it worth your while. The majority of the masternodes I've seen are short-term scams looking to make a million or two. You have to ask yourself when and how you are going to shut down the masternode, and stick to that plan. Don't be the last dummy holding; keep checking the volume of the exchanges where the coins are available.


10 How to pick the right coin

Check the coin’s Discord or Telegram channel. Look for signs of trouble in the support area and look at how lively the community is. Do the team members write in a language they understand? Do they write at all? Spam and shitposting are signs of a badly-maintained community. The developers might be in the Bahamas by now.

Check other social proof: how many followers do they have on Twitter? Are they real or purchased followers? Do they seem to know what they are talking about? Check how many committers the project has on GitHub; a not-so-surprising majority of these projects have only one. Either that person has an earth-shattering idea or they're in for a quick win.
Also-important-but-weirdly-enough-not-so-really: does this coin have a purpose? Is it filling a real-world need?

Online you will find listings of many, if not all, of these coins. One of the elements of these listings is the ROI (Return on Investment). Don't fall for it: ROI can be made into whatever they want with the right monetary policy and price manipulation, especially in "young" coins with few masternodes. Check the daily volume (the total value of daily operations) and the total market capitalization, and divide one by the other; then take the masternode worth (the amount of US$ in coin you need to stake) and find a combination of the three numbers you like. Open Excel, do your own research.
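
That volume and market-cap arithmetic can be sketched like this; the numbers are invented for illustration, not real coin data:

```python
def liquidity_ratio(daily_volume_usd: float, market_cap_usd: float) -> float:
    """Daily volume divided by market cap: roughly how much of the
    coin's total value actually changes hands each day."""
    return daily_volume_usd / market_cap_usd

# Invented example: $50,000 daily volume on a $2,000,000 market cap.
ratio = liquidity_ratio(50_000, 2_000_000)
print(f"{ratio:.1%}")
```

A low ratio relative to the masternode worth means you may not be able to sell your stake without moving the price.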

The volume tells you the most brutal of truths: the price might be attractive, but if you cannot get your money out, it’s worthless.

Again, this is NOT investment advice. In fact, I guarantee that you will gain experience and lose whatever you invest! That should be your default expectation.