1. What did you do to get it? Please add all your steps, so we can find out what went wrong.

2. The IRIS container changes nothing that comes from the base Ubuntu image. If you can install mc there, it should be possible in IRIS as well. But I don't see any reason why mc should be available inside the container. And why do you need it there?

It's not so important to have the key before the install; it's much more important to have it when the server is running.

But how can you be sure that your key is suitable for this platform? You can check it on a running container: enter the container and go to csession. You can find some interesting methods of $SYSTEM.License in the documentation which can help you check the license file inside the container.
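
For example, from a session inside the running container (a minimal sketch; the method names are taken from the %SYSTEM.License class reference, so check the exact ones for your version):

 ; a minimal sketch of checking the license from a terminal session
 ; print information about the currently active key
 Do $SYSTEM.License.CKEY()
 ; customer name and expiration date taken from the key
 Write $SYSTEM.License.KeyCustomerName(),!
 Write $SYSTEM.License.KeyExpirationDate(),!
 ; license units: available vs. currently consumed
 Write $SYSTEM.License.LUAvailable()," available, ",$SYSTEM.License.LUConsumed()," consumed",!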

The most common reasons for getting such an error are a missing license file or an exceeded license limit.

Just check it; you can mount it during docker run or copy it into the image during docker build.

I see that you use quite an old version; I would recommend considering the latest version based on IRIS, due to the many limitations of using such an old version in Docker.

Some time ago I made an example of an Angular application with IRIS on the backend.

The source for this project is available on GitLab.

For how to develop an Angular application, you should look at the Angular documentation and at frontend development in general. There are tools which help you develop and build the frontend side, such as webpack, which does most of the work of building your sources into a production-ready environment.

For my simple project, you need only Docker and any editor, preferably VSCode.

With the command below, you will get a running server in development mode, so you can edit both the Angular code and the IRIS code and see the result immediately.

docker-compose up -d

This project is also deployable with Kubernetes, so after any push of changes to GitLab, it will be built and tested automatically.

node_modules never goes to source control; it's enough to have package.json and package-lock.json there. node_modules may contain platform-specific libraries.

I hope I will manage to write a complete article about this project, with all the details.

There is also the ClassMethod GetGlobalSize in the class %Library.GlobalEdit, where you can choose whether to count the fast way or not, and you will get a different result.

 ClassMethod GetGlobalSize(Directory As %String, GlobalName As %String, ByRef Allocated As %Integer, ByRef Used As %Integer, fast As %Boolean = 0) as %Status

Get the size of this global:
'Allocated' - the total size, in MB, of blocks allocated for the global.
'Used' - the total used data, in MB, for the global.
'fast' - TRUE: faster return, it won't return the value of 'Used'.
FALSE: slower return, it returns values for both 'Allocated' and 'Used'.

So, in fast mode it just counts blocks, doesn't care how full of data those blocks are, and multiplies the number of blocks by the block size.

'Used' is counted only when you pass fast=0; it calculates the exact size and, to be accurate, reads all the blocks, so it could be slower.
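
For example (a sketch: the database directory and global name below are placeholders, so substitute your own):

 ; a sketch: "/usr/irissys/mgr/user/" and "MyGlobal" are placeholder values
 Set dir = "/usr/irissys/mgr/user/"
 ; fast = 0: reads all blocks, returns both 'Allocated' and 'Used'
 Set sc = ##class(%Library.GlobalEdit).GetGlobalSize(dir, "MyGlobal", .alloc, .used, 0)
 Write "Allocated: ", alloc, " MB, Used: ", used, " MB", !
 ; fast = 1: counts blocks only, so 'Used' is not returned
 Set sc = ##class(%Library.GlobalEdit).GetGlobalSize(dir, "MyGlobal", .alloc, .used, 1)
 Write "Allocated (fast): ", alloc, " MB", !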

Visual Studio and Visual Studio Code are two very different products that just have similar names.

To configure Visual Studio Code, you can use these settings:

{
    "objectscript.conn": {
        "active": true,
        "host": "localhost",
        "port": 57772,
        "ns": "SAMPLES",
        "username": "admin",
        "password": "SYS"
    }
}

Where:

  • active should be true if you are going to connect
  • port should point to the web server port, not the super server port
  • username/password: the user should have enough permissions, including the %Development role

I'm not an InterSystems guy, so I can only explain how it works from my point of view.

Every global name has an internal representation in some kind of binary format; I don't know how it is converted back and forth, but this string is used to find the correct block. For example, when you are looking for ^C(9996,46,yellow), it first reads the Map (Block 3) to find where global ^C starts (Block 44). Then, using this internal format for the global, it finds the closest node in the first pointer block; if that points to another pointer block, the same search repeats until it reaches a Data block, which may also contain data for multiple nodes.

I'm not sure I can explain it better, but the most important point is that the B*-tree helps to find the final block, and its neighbours, very quickly.
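
As an illustration at the ObjectScript level (a sketch with made-up data in the same hypothetical global ^C, not the internal block-level code): once a node's data block is located, its neighbours are cheap to reach, which is what makes ordered traversal with $Order fast:

 ; a sketch with sample data in the hypothetical global ^C
 Set ^C(9996,46,"yellow") = "some value"
 Set ^C(9996,46,"zebra") = "another value"
 ; $Order walks the sorted subscripts, i.e. the neighbouring nodes in the B*-tree
 Set key = ""
 For {
     Set key = $Order(^C(9996,46,key))
     Quit:key=""
     Write key, " = ", ^C(9996,46,key), !
 }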