PS: If you are getting worried about Docker, you shouldn't.

The Open Container Initiative (OCI) has already published version 1.0.1 of its specifications, which give implementers an agreed standard for how containers are run and how images are handled. Docker and other prominent players like CoreOS (now owned by Red Hat), Red Hat itself, SUSE, Microsoft, Intel, VMware, AWS, IBM and many others are part of this initiative.

So one would hope our investment is safeguarded.

NP Sebastian.

You could handle %SYS persistence with creativity :)

Given that you'd have a script or a container orchestration tool (Mesosphere, Kubernetes, Rancher, Nomad, etc.) handling the rollout of a new version of your app, you'd have to factor in exporting those credentials and security settings before stopping your container, and then importing them as you spin up the new one. It's a workaround, if you like, but doable IMO.
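Just to make the idea concrete, here is a minimal sketch of what such a rollout step could look like, driving the docker CLI from Python. It is only an illustration: the docker commands are real, but the export_security.sh / import_security.sh scripts inside the image are hypothetical stand-ins for whatever export/import mechanism you choose (e.g. the ^SECURITY utilities), and all the names are mine.

    # rollout_sketch.py -- hypothetical rollout step: carry security settings
    # from the old container to the new one (script names are illustrative).
    import subprocess

    OLD, NEW = "cache-app-v1", "cache-app-v2"
    EXPORT_DIR = "/shared/secexport"   # host path mounted at /export in both containers

    def sh(*cmd):
        """Run a command and fail loudly if it does not succeed."""
        subprocess.run(cmd, check=True)

    # 1. Export credentials/security settings from the running container
    #    (export_security.sh is a placeholder for your chosen export routine).
    sh("docker", "exec", OLD, "/scripts/export_security.sh", "/export/security.xml")

    # 2. Stop and remove the old container.
    sh("docker", "stop", OLD)
    sh("docker", "rm", OLD)

    # 3. Start the new version with the same export directory mounted.
    sh("docker", "run", "-d", "--name", NEW,
       "-v", f"{EXPORT_DIR}:/export",
       "myregistry/my-cache-app:2.0")

    # 4. Re-import the settings into the new instance.
    sh("docker", "exec", NEW, "/scripts/import_security.sh", "/export/security.xml")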

With a DB you probably have to think about schema migrations and other things anyway, so... 

HTH

Hi Sebastian,

Caché and Ensemble will be fully supported in a Docker container. However, for the reason expressed above, right now there is no plan to offer Durable %SYS in Caché and Ensemble.

InterSystems IRIS is a new product, so things are different. You will find that some features are deprecated in favour of others, etc. One of them is the license key. In general, in my comment above I was referring, at least in my mind, to a licensing plan that you will hopefully find more flexible and favourable.

Hi Dmitry,

Thank you for downloading our InterSystems IRIS data platform container.

Our images are carefully crafted: dependencies are checked and even pinned, so we know exactly what we ship. We also test them regularly for security vulnerabilities. By the time our images are published, they are a safe bet for you to use. In general, we expect our customers to derive their images from the published one, so you only have to worry about implementing your app or solution on top of it.

However, I understand that you might want to create your own custom image. We could make isc-main available if there is demand for it.

Password: we do not want to be in the news like a well-known database that was recently discovered with thousands of instances running in the cloud with default credentials. We are forcing you to do the right thing; with containers it is otherwise too easy to ignore this, and as you can appreciate, that is not a safe practice.

This is also true when you use InterSystems Cloud Manager to provision an InterSystems IRIS data platform cloud cluster. If you forget to define the password for the system users, you will be forced to create one before the services are run.

Hi Rob,

I think the article is confusing. There are various technologies coming into play here, and simple terms like "volumes" are just not enough to convey the context they are used in. I wish IT could speak like in Edward de Bono's "The De Bono Code Book". Things would be clearer, maybe...  :)

This post could degenerate into a long essay, so I'm going to focus on explaining the basics. My hope is that future Q&A in this thread will clarify things further and allow us all to learn more as these technologies develop while we sleep... 

The wider context that the marketing person from RH is writing about is that of containers & persistent data. That alone is enough to write a book :) In essence, a container does NOT exclude persistent data. Describing containers as ephemeral is only partially true because:

  • They actually have a writeable layer (see the sketch after this list)
  • They force us to think about the separation of concerns between code and data (this is a good thing)
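To make the first point tangible, here is a small sketch (plain Python driving the docker CLI; the alpine image is just an example) showing the writeable layer in action: a file written inside a container shows up in docker diff, but it lives and dies with that container unless you put it on a volume.

    # writable_layer_demo.py -- show that a container has a writeable layer,
    # and that it disappears with the container (hence: keep data on volumes).
    import subprocess

    def run(*cmd):
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # Start a throwaway container and write a file into its writeable layer.
    cid = run("docker", "run", "-d", "alpine", "sh", "-c",
              "echo 'hello' > /data.txt && sleep 60").strip()

    # 'docker diff' lists what changed on top of the image: our /data.txt.
    print(run("docker", "diff", cid))   # expect a line like: A /data.txt

    # Remove the container: the writeable layer (and /data.txt) goes with it.
    run("docker", "rm", "-f", cid)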

The other part of the context of the article is that the writer refers to Kubernetes, a container orchestration engine that you can now find in the next Docker CE engine you download. Yes, Docker has Docker Swarm (its own container orchestrator) built into the engine, but you can now play with Kubernetes (K8s) too. In the cloud era, everybody is friends with everybody :)

Now, K8s was selected last December as the main application orchestrator for OpenShift (a cloud-like, full application-provisioning and management platform owned by Red Hat). That was a major change: no more VMs to handle an application; just provide containers and let K8s manage them.

Now, we all know that having 100 Nginx containers running is one thing, but having 100 stateful, transactional & mirrored Caché, Ensemble or InterSystems IRIS containers is a completely different game. Well, you would not be surprised to learn that persistence is actually the toughest issue to solve and manage... with & without containers. Think of an EC2 instance that just died. What happened to that last transaction you committed? Hopefully, you had an AWS EBS volume mounted on it (i.e. you did not use the boot/OS disk) and the data is still in that volume, which has a life of its own. All you have to do is spin up another EC2 instance and mount that same volume. The point is that you must know that you have an EBS volume and take very good care of it and of its data: you snapshot it, you replicate the data, etc.
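If you want to picture that EC2/EBS scenario as code, here is a minimal sketch with boto3 (the region, IDs and device name are placeholders): detach the surviving data volume and attach it to a replacement instance. Real recovery would of course also check volume state, tags, filesystem consistency, and so on.

    # ebs_reattach_sketch.py -- illustrative only: move a surviving EBS data
    # volume (not the boot disk) from a dead EC2 instance to a new one.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region

    VOLUME_ID = "vol-0123456789abcdef0"    # the data volume with a life of its own
    NEW_INSTANCE = "i-0fedcba9876543210"   # the freshly launched replacement

    # Detach from the dead instance (Force because that instance is gone).
    ec2.detach_volume(VolumeId=VOLUME_ID, Force=True)
    ec2.get_waiter("volume_available").wait(VolumeIds=[VOLUME_ID])

    # Attach the same volume to the new instance; the data is still there.
    ec2.attach_volume(VolumeId=VOLUME_ID, InstanceId=NEW_INSTANCE, Device="/dev/sdf")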

The exact same thing is true when you use K8s: you must provision those volumes. Now, in K8s, volumes have a slightly different definition: yes, a volume defines the storage and is a higher-level abstraction (see their docs for more info), but the often-neglected part is that a volume is linked to a pod. A pod is a logical grouping of containers with its own definition and lifecycle; a K8s persistent volume, however, has a distinct life of its own and can survive the crash of a pod. Here we get into software-defined storage (SDS) territory and hyperconverged storage/infrastructure. This is a very interesting and evolving part of the technology world that I believe will enable us to be even more elastic in the future than we are now (both on-prem and in public clouds).
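As a concrete illustration of that volume-vs-pod relationship, here is a sketch using the official Kubernetes Python client (names, sizes and the image are placeholders): the PersistentVolumeClaim is created as an object in its own right, and the pod merely mounts it, so the claim and its data outlive any individual pod.

    # pvc_pod_sketch.py -- a PVC has its own lifecycle; the pod only mounts it.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    # 1. A PersistentVolumeClaim: storage with a life of its own.
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="db-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
        ),
    )
    core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

    # 2. A pod that mounts the claim; if this pod dies, the claim survives
    #    and a replacement pod can mount the very same data.
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="db-pod"),
        spec=client.V1PodSpec(
            containers=[client.V1Container(
                name="db",
                image="myregistry/my-db:latest",   # placeholder image
                volume_mounts=[client.V1VolumeMount(
                    name="data", mount_path="/volumes/data")],
            )],
            volumes=[client.V1Volume(
                name="data",
                persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                    claim_name="db-data"),
            )],
        ),
    )
    core.create_namespaced_pod(namespace="default", body=pod)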

K8s is just trying to make the most of what is available now, keeping a higher-level abstraction so that it can further tune its engine as new solutions come to market.

To conclude, and to clear up some misconceptions the article might have created:

There aren't any special data containers in K8s. There are data containers in Docker (just to confuse things further :) but you should not use them for your database storage.

With K8s and Docker there are SDS drivers/provisioners (or plugins, in Docker lingo) that can leverage the available storage. K8s allows you to mount them in your containers. It is supposed to do other things too, like formatting the volumes, but not all drivers are equal and, as usual, YMMV.
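To tie that to the previous sketch: the driver/provisioner is what you name in a StorageClass, and claims that reference it get their volumes carved out of the underlying storage automatically. A minimal, illustrative example with the same Python client (the AWS EBS provisioner is just one of many, and parameters vary per driver):

    # storageclass_sketch.py -- name the SDS driver/provisioner in a StorageClass;
    # PVCs that reference it are then provisioned dynamically.
    from kubernetes import client, config

    config.load_kube_config()

    sc = client.V1StorageClass(
        metadata=client.V1ObjectMeta(name="fast-ebs"),
        provisioner="kubernetes.io/aws-ebs",   # the driver doing the real work
        parameters={"type": "gp2"},            # driver-specific; YMMV
    )
    client.StorageV1Api().create_storage_class(body=sc)

    # A claim would then point at it with: spec.storage_class_name = "fast-ebs"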

What those drivers and plugins can do for you is mount the same K8s PV from another container spun up in another pod if your original container or pod dies. The usual caveats apply: if you are in another AZ or region, you won't be able to do that. IOW, we are still tied to what the low-level storage can offer us. And of course, we would still need to deal with the consistency of the DB that suddenly died. So, how do you deal with that?

With the InterSystems IRIS data platform you can use its Mirroring technology to safeguard against those container or pod disappearances. Furthermore, as Mirroring replicates the data, you could have a simpler (easier, lower TCO, etc.) volume definition with lower downtime. Just make sure you tell K8s not to interfere with what it does not know about: DB transactions ;)

HTH

In general, it is a good thing to keep these 2 concerns (code vs data) separate. You're de-coupling 2 items (concerns, concepts) that are effectively different. When using technologies like containers & container services this point becomes even more apparent, and it allows you to be much more flexible and agile in your deployments. Of course, schema migration is still an issue, but the separation still helps. This is one of the reasons why organizations can perform multiple deployments per day.
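As a tiny illustration of that de-coupling with plain Docker (image names and paths are placeholders of mine): the code lives in the image, the data lives in a named volume, and you can replace one without touching the other.

    # code_vs_data_sketch.py -- code in the image, data in a named volume:
    # upgrading the app is just swapping the image; the volume stays put.
    import subprocess

    def sh(*cmd):
        subprocess.run(cmd, check=True)

    sh("docker", "volume", "create", "appdata")   # the data, kept separate

    # v1 of the application, data mounted from the named volume
    sh("docker", "run", "-d", "--name", "app",
       "-v", "appdata:/durable", "myregistry/my-app:1.0")

    # ... later: upgrade the code without touching the data ...
    sh("docker", "rm", "-f", "app")
    sh("docker", "run", "-d", "--name", "app",
       "-v", "appdata:/durable", "myregistry/my-app:2.0")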

HTH