We moved our application to AWS, and we have some data which we need to keep for a while. With this feature, we can move old data to cheaper storage.

I believe moving data to cheaper storage is the main use case. Another option is when a table gets too big and someone would like to split it across multiple databases, together with its indexes.

I don't see any difference:

Time taken: 23.218561416957527 seconds to insert 100000 at  4306.899045302852  records per second.

Time taken: 23.179011167027056 seconds to insert 100000 at  4314.247889152987  records per second.

from sqlalchemy import create_engine
import numpy as np
import time
import iris

hostname = "localhost"
port = 1972
namespace = "USER"
username = "_SYSTEM"
password = "SYS"

# Create the SQLAlchemy engine
DATABASE_URL = (
    f"iris+intersystems://{username}:{password}@{hostname}:{port}/{namespace}"
)
engine = create_engine(DATABASE_URL, echo=True)
args = {
    "hostname": hostname,
    "port": port,
    "namespace": namespace,
    "username": username,
    "password": password,
}
connection = iris.connect(**args)
# connection = engine.raw_connection()

# Generate data for each row (50 fields)
columns_count = 50
drop_table_sql = "DROP TABLE IF EXISTS test_table"
columns = [f"field_{i + 1}" for i in range(columns_count)]
create_table_sql = f"CREATE TABLE test_table ({', '.join([f'{col} DOUBLE' for col in columns])})"

num_records = 100000
# Define SQL insert statement
sql_insert = f"INSERT INTO SQLUser.test_table ({', '.join(columns)}) VALUES ({', '.join(['?'] * columns_count)})"
record_values = []
# Execute SQL insert
try:
    start_time = time.perf_counter()  # Capture start time
    batch = 0
    cursor = connection.cursor()
    cursor.execute(drop_table_sql)
    cursor.execute(create_table_sql)
    connection.commit()
    for _ in range(num_records):
        record_values = [np.random.rand() for _ in range(columns_count)]
        cursor.execute(sql_insert, record_values)
        batch = batch + 1
        if batch >= 10000:
            connection.commit()
            print("Batch inserted successfully!")
            batch = 0
    connection.commit()
    end_time = time.perf_counter()  # Capture end time
    elapsed_time = end_time - start_time
    print(
        f"Time taken: {elapsed_time} seconds to insert {num_records} at ",
        num_records / elapsed_time,
        " records per second.",
    )
except Exception as e:
    print("Error inserting record:", e)
finally:
    cursor.close()
    connection.close()
    engine.dispose()
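The loop above issues one execute per row and commits every 10,000 rows. If the driver implements the standard DB-API executemany (which I believe the InterSystems Python driver does), sending the rows in batches may reduce per-call overhead. Here is a minimal sketch of just the batching logic, with the driver calls left as comments since they need a live IRIS instance:

```python
import random

def chunked(rows, size):
    """Yield successive slices of at most `size` rows."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

# Same row shape as the benchmark: tuples of 50 doubles
# (a smaller sample here, to keep the sketch quick).
rows = [tuple(random.random() for _ in range(50)) for _ in range(1000)]

for batch in chunked(rows, 100):
    # With a live connection, each batch would be sent as:
    #   cursor.executemany(sql_insert, batch)
    #   connection.commit()
    pass

print(len(list(chunked(rows, 100))))  # 10 batches of 100 rows
```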

I think you're just confusing $listbuild with a list in the BPL context. In BPL, when you define it as a List Collection, it will use the class %Collection.ListOfDT, or %Collection.ArrayOfDT in the case of an array.

That means you should use:

 if 'context.Facilities.Find(##class(Ens.Rule.FunctionSet).SubString(context.EpicDepartmentID,1,4))
 {
   do context.Facilities.Insert(##class(Ens.Rule.FunctionSet).SubString(context.EpicDepartmentID,1,4))
 }

The support for Node.js in IRIS is quite primitive and limited to the Native API only: globals and method calls, with no SQL.

Check in the IRIS folder what you have installed with IRIS for Node.js.

I don't have Windows, only Docker, and in my case these files are present:

/usr/irissys/bin/iris1200.node
/usr/irissys/bin/iris800.node
/usr/irissys/bin/iris1600.node
/usr/irissys/bin/iris1400.node
/usr/irissys/bin/iris1000.node
/usr/irissys/dev/nodejs/intersystems-iris-native/bin/lnxubuntuarm64/irisnative.node

If you want to use IRIS SQL from Node.js, you can try my package, which you can install with npm:

npm install intersystems-iris

const { IRIS } = require("intersystems-iris");

async function main() {
    const db = new IRIS('localhost', 1972, 'USER', '_SYSTEM', 'SYS')
    console.log('connected')
    let res = await db.sql("select 1 one, 2 two")
    console.log(res.rows);
    await db.close()
}

main()

It's quite simple at the moment and only supports SQL without parameters, but it should work, I believe.

Studio is already deprecated, and it works only on Windows.

Accessing the Management Portal depends on how you installed IRIS and which version you used. Your installation may not have the private web server installed, and then you would need to install a web server yourself.

Run the command:

iris list

It will output something like this:

Configuration 'IRIS'   (default)
    directory:    /usr/irissys
    versionid:    2025.1.0L.172.0com
    datadir:      /usr/irissys
    conf file:    iris.cpf  (WebServer port = 52773)
    status:       running, since Sat Feb 15 06:34:51 2025
    SuperServers: 1972
    state:        ok
    product:      InterSystems IRIS

Here, the WebServer port is what you are looking for, and the full URL will be http://localhost:52773/csp/sys/UtilHome.csp

If you don't have a private web server, check the documentation on how to configure a separate web server.

I tried to check 2025.1 but did not find it.

$ docker inspect --format '{{ index .Config.Labels "com.intersystems.platform-version" }}' containers.intersystems.com/intersystems/iris:latest-{em,cd,preview}
2024.1.3.456.0
2024.3.0.217.0
2024.1.3.456.0

For some reason, the preview tag points to the em version. Why is that?

For the Community Edition:

$ docker inspect --format '{{ index .Config.Labels "com.intersystems.platform-version" }}' containers.intersystems.com/intersystems/iris-community:latest-{em,cd,preview}
2024.1.2.398.0com
2024.3.0.217.0com
2025.1.0.204.0com

At least here the 2025.1 version is present.

But I've noticed that the em tag is now not the same version as for the Enterprise edition.

Anyway, where is 2025.1 for the Enterprise edition? The Community Edition is too limited to check everything on it.

The post is two months old at this point, but it's still not available.

I'm curious why you would need it; are there any issues when working via shared memory?

As already mentioned, you have some options: use a different host, or use DriverParameters.
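To illustrate the first option: the shared-memory shortcut only applies to localhost and 127.0.0.1, so pointing the client at the machine's real hostname (or its LAN IP) forces a plain TCP connection. A small Python sketch of the idea; the helper name is my own, not part of any driver:

```python
import socket

def tcp_host(host: str) -> str:
    """Return a host value that avoids the shared-memory shortcut.

    localhost and 127.0.0.1 may be connected via shared memory,
    so substitute the machine's actual hostname instead.
    """
    if host in ("localhost", "127.0.0.1"):
        return socket.gethostname()
    return host

print(tcp_host("127.0.0.1"))       # the machine's own hostname
print(tcp_host("db.example.com"))  # unchanged
```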

SharedMemory

Optional. Boolean indicating whether or not to always use shared memory for localhost and 127.0.0.1. Default = null. See IRISDataSource methods getSharedMemory() and setSharedMemory(). Also see “Shared Memory Connections”.

For instance, in DBeaver it may look like this. If there is no SharedMemory option visible, you can still add it manually.