The only way to get it working with CSP files right now is to configure isfs. In this case, you will edit all files only remotely, with no local copies.
You need a file with the code-workspace extension and content like this:

{
    "folders": [
        {
            "uri": "isfs://yourserver/csp/user?ns=USER&csp"
        }
    ]
}

More details on how to configure and work with isfs can be found in the documentation.

And just a note about one more issue. The description of the %JSONNew method says that I can pass JSON which will be imported into the just-created object:

/// Get an instance of a JSON enabled class.<br><br>
/// 
/// You may override this method to do custom processing (such as initializing
/// the object instance) before returning an instance of this class.
/// However, this method should not be called directly from user code.<br>
/// Arguments:<br>
///     dynamicObject is the dynamic object with the values to be assigned to the new object.<br>
///     containerOref is the containing object instance when called from JSONImport.
ClassMethod %JSONNew(dynamicObject As %DynamicObject, containerOref As %RegisteredObject = "") As %RegisteredObject [ CodeMode = generator, GenerateAfter = %JSONGenerate, ServerOnly = 1 ]
{
    Quit ##class(%JSON.Generator).JSONNew(.%mode,.%class,.%property,.%method,.%parameter,.%codemode,.%code,.%classmodify,.%context)
}

But in fact, it does nothing with it, and the generated code just returns a new object:

%JSONNew(dynamicObject,containerOref="") public {
  Quit ##class(Conduit.Model.User).%New()
}

Yeah, I know that I can do %FromJSON, but it looks like overhead here.
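To be concrete, the manual workaround is something like this (a sketch; Conduit.Model.User is my %JSON.Adaptor class, and %JSONImport is the documented way to populate a JSON-enabled object from a dynamic object):

    // create the object and import the dynamic object manually
    Set user = ##class(Conduit.Model.User).%New()
    Set tSC = user.%JSONImport(dynamicObject)

That is exactly the work I would expect %JSONNew to do for me, given its description.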

Look at this article: you can generate an API implementation just from a swagger specification. It generates a bunch of methods, one for each call in the swagger spec.
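If memory serves, the generation itself is done by the %REST.API class (a sketch; it assumes IRIS 2019.2+, where this class appeared; the path and application name are placeholders):

    // load the swagger (OpenAPI 2.0) spec and generate the REST application classes
    Set file = ##class(%Stream.FileCharacter).%New()
    Do file.LinkToFile("/path/to/swagger.json")
    Set spec = ##class(%DynamicObject).%FromJSON(file)
    Set sc = ##class(%REST.API).CreateApplication("Conduit.API", spec)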

The generated stubs look something like this:

/// Get an article. Auth not required<br/>
/// The method arguments hold values for:<br/>
///     slug, Slug of the article to get<br/>
ClassMethod GetArticle(slug As %String) As %DynamicObject
{
    //(Place business logic here)
    //Do ..%SetStatusCode(<HTTP_status_code>)
    //Do ..%SetHeader(<name>,<value>)
    //Quit (Place response here) ; response may be a string, stream or dynamic object
}

So, in this method, I would add code like this:

/// Get an article. Auth not required<br/>
/// The method arguments hold values for:<br/>
///     slug, Slug of the article to get<br/>
ClassMethod GetArticle(slug As %String) As %DynamicObject
{
    Set article = ##class(Article).slugOpen(slug,, .tSC)
    If $$$ISERR(tSC) {
        Do ..%SetStatusCode(404)
        Return 
    }
    
    Return article
    #; Or
    Return article.%JSONExport()
}

But this will not work. The only way to make it work is to return a string or a stream:

Return article.%JSONExportToString()
Return article.%JSONExportToStream()

But then I have to wrap the output myself. And the best would be to get something like this working:

Return { "article": (article.%JSONExport()) }

Instead, I have to write this, and hope I do not hit a MAXSTRING error in some cases:

Return "{ ""article"": " _ article.%JSONExportToString() _ "}"

And that is just the case with a single object, while in some cases I have to return an array.
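Until something like that is supported, wrapping through a stream avoids MAXSTRING (a sketch; it assumes %JSONExportToStream appends to a stream passed by reference, and if it replaces the stream instead, CopyFrom can stitch the pieces together):

    // build the wrapped response in a temporary character stream
    Set stream = ##class(%Stream.TmpCharacter).%New()
    Do stream.Write("{ ""article"": ")
    Set tSC = article.%JSONExportToStream(.stream)
    Do stream.Write(" }")
    Return stream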

Definitely, something is wrong in the configuration. Code in InterSystems products is in fact no different from any other data stored there. So, you may have some wrong mappings, or you may be storing some of your code in %SYS.

I have a configuration with mirroring + ECP, and it works perfectly: I don't even care which of the nodes is primary, and I can switch it at any time with no issues. And I have more than one code database, and more than 20 data databases. The mirror nodes run on 2018.1 while the ECP application servers run on 2012.2, with no issues.

If you have some doubts about your configuration, you can ask for help through WRC, or we can help you with it: we can review your settings and tell you what actually happened and how to solve it.

I would suggest a few points on how to catch what's going wrong with the unexpected growth of databases; it doesn't really matter whether it's IRISTEMP/CACHETEMP or some other database.

  • zn "%SYS" Do ^GLOBUFF will show you the biggest globals in the global buffer, and it is a useful tool for inspecting slowness as well. If you see that some global from IRISTEMP uses too much of the global buffer, you can use that as a lead to investigate further (see the sketch after this list).
  • An integrity check on the growing database: if the size is not too big, this is the simplest way to understand the sizes of the globals in the database.
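A terminal session for the first point would look roughly like this (^%GSIZE is an addition on my side; it reports per-global sizes and complements the buffer view):

    zn "%SYS"
    ; show which globals occupy the most of the global buffer
    Do ^GLOBUFF
    ; interactively report per-global sizes for a chosen database
    Do ^%GSIZE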

Once you have found the exact global name, you have a point from which to start the investigation.

If you have stopped IRIS, you can just delete IRISTEMP; it will be recreated from scratch after the start anyway.

Sorry for that, could you please file the issue here? This check is needed to make sure that the file on the server has not been updated while being edited in VSCode. Unfortunately, it may not work perfectly yet.

As a workaround, I would suggest importing right in Caché/IRIS the common way, through $system.OBJ.Load/ImportDir.
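For example (a sketch; the path and compile flags are placeholders):

    // import and compile every class, routine and include file in a folder, recursively
    Do $system.OBJ.ImportDir("/path/to/src", "*.cls;*.mac;*.inc", "ck", .errors, 1)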

It's in fact not a common task for an editor to import big amounts of files. So, you'll get the best behavior if you import and compile them separately from the editor.

I have not worked with Visual Studio, so I have no idea how it works there. But instead of Studio, I would recommend using VSCode, of course. It already supports debugging; you can watch variables (and expressions) as well as hover over variables. It does not show objects in a readable way, but it supports them in expressions, so obj.property will work.

Unfortunately, for various reasons some systems may not use the latest versions of InterSystems products, while integrity checks can in some cases still be run on lower versions. On the other hand, for some systems, reverting to a backup can only be used as a last chance to restore the data, due to the sensitivity of the stored data and the impossibility of recovering everything entered since the latest backup. So, if I found database degradation, I would rather attempt to recover the database; fortunately, I have the experience. I would possibly lose some data, but the amount of lost data would be significantly less than if I restored from a backup. Around a hundred GB of journals per day, with tens of terabytes of data to back up, makes a quick restore impossible for a system that has to be available with no downtime.

In that particular case, we have 16KB blocks, due to past issues with caching big string blocks over ECP.

But I think there are a few ways integrity checking could be improved for such cases. I see at least two reasons why we should check integrity periodically:

  • To verify that we don't have any errors in the database which may cause a system failure.
  • To verify that we don't have any issues in the database, so we can be sure that our data is completely available.

I once faced an issue where an error at the pointer level caused issues with the write daemon, and our system just died when the application tried to access the data. It took some time to figure out why it had happened, even though we did not have any issues with the database at all, only with ECP. That happened on version 2012.2. So I'm thinking it would help if I were able to set how deeply blocks should be scanned; say, ignore data blocks and scan only pointer blocks. I don't have the exact proportions, but I'm sure that in most cases we have many more data blocks than pointer blocks, so this would make the integrity check produce results faster.

I know quite well how the database looks inside, but I have not yet managed to look at how database backups work, incremental backups being the most interesting. As far as I know, backup works with blocks, so maybe there is a way to make incremental integrity checks as well. That would not help find issues that happened in unchanged blocks due to hardware failures, but it could confirm that recently changed data is OK.