For performance reasons, an index can be defined so that some columns are part of the index key itself, used for searching, while other columns are stored in the data part of the index and are used to build the result set when requested. So, if such an index gets corrupted, the SQL engine still expects those values in the index and reads them from there; it will not go to the blocks where the data originally sits. As a result, some of the data may be missing from the output rows.
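
For illustration, a minimal sketch of such an index in ObjectScript (the class and property names are hypothetical):

Class User.Person Extends %Persistent
{

Property Name As %String;

Property City As %String;

/// Name is the search key; City is stored in the data part of the index,
/// so a query like SELECT Name, City can be answered from the index blocks
/// alone, without reading the master data blocks
Index NameIdx On Name [ Data = City ];

}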

Sure, it’s possible to do so. A React application is just the frontend side, and IRIS itself can act as the backend server. Or you can write the backend server in some other language, e.g. NodeJS, Python, Java or .NET, and have it connect to IRIS as a database.
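
As an illustration of the first option, here is a minimal sketch of a REST endpoint in IRIS that a React frontend could call with fetch(); the class name and route are hypothetical:

Class User.API Extends %CSP.REST
{

Parameter CONTENTTYPE = "application/json";

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
  <Route Url="/users" Method="GET" Call="ListUsers"/>
</Routes>
}

ClassMethod ListUsers() As %Status
{
  #; return a JSON array for the frontend to consume
  Set result = []
  Do result.%Push({"name": "test1"})
  Write result.%ToJSON()
  Quit $$$OK
}

}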

You can look at my Realworld project, in particular the implementation of the backend server. The project offers a wide variety of frontends and backends in different languages, using different databases, so you will find a React frontend there that works with a backend on IRIS.

And take a look at my article about this project.

It depends on what you are trying to achieve.

Import as is, with an iterator

Class User.Test Extends (%RegisteredObject, %JSON.Adaptor)
{

Property name As %String;

ClassMethod Import()
{
  Set data = [{
    "name": "test1"
  },
  {
    "name": "test2"
  }]

  #; iterate over the elements of the dynamic array
  Set iter = data.%GetIterator()
  While iter.%GetNext(.key, .value) {
    Set obj = ..%New()
    #; import one array element into a fresh object
    Set tSC = obj.%JSONImport(value)
    #; skip elements that fail to import
    Continue:$$$ISERR(tSC)
    Write !,obj.name
  }
}

}
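
You can call it from a terminal, e.g.:

Do ##class(User.Test).Import()

which prints test1 and test2 on separate lines.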

Import with a wrapper object

Class User.TestList Extends (%RegisteredObject, %JSON.Adaptor)
{

Property items As list Of User.Test;

ClassMethod Import()
{
  Set data = [{
    "name": "test1"
  },
  {
    "name": "test2"
  }]

  #; wrap the array in an object so it maps onto the items property
  Set data = {
    "items": (data)
  }

  Set list = ..%New()
  Set tSC = list.%JSONImport(data)
  #; stop if the import failed
  Quit:$$$ISERR(tSC)

  #; walk the typed list of User.Test objects
  Set key = ""
  For {
    Set obj = list.items.GetNext(.key)
    Quit:key=""
    Write !,obj.name
  }
}

}

Jeffrey, thanks. But suppose I had only 16KB buffers configured, with a mix of 8KB databases (mostly system databases and CACHETEMP/IRISTEMP) and some of my application data stored in 16KB databases. The 8KB databases would still be cached in the 16KB buffers, mapped one to one, with each 8KB block occupying a full 16KB buffer. Is that correct?

So, if I needed separate global buffers for streams, I would just have to give the stream databases a block size not used by any other data, allocate a fairly small amount of global buffer for that block size, and that would be enough to use the global buffers more efficiently, at least giving non-stream data a higher priority?
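
If I understand the CPF format correctly, that would look something like this in iris.cpf (the sizes are hypothetical; the six values are the MB allocated for 2KB, 4KB, 8KB, 16KB, 32KB and 64KB buffers respectively):

[config]
globals=0,0,4096,512,0,0

That is, 4096MB of 8KB buffers for regular data and a small, separate 512MB pool of 16KB buffers dedicated to the stream databases.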

I mentioned above a system with a significant number of streams stored in the database, and I just checked how the global buffers are used there: streams account for only around 6%. The system is very active, including its files. Tons of objects are created every minute, with attached files and changes to those files (yes, our users can edit MS Word files online, on the fly, and we keep all the versions).

So, I still see no reason to change it, and I still see plenty of benefits in keeping it as is.