Here is the procedure sample:

http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=...

 

ClassMethod CalcAvgScore(firstname As %String, lastname As %String) [ SqlProc ]
{
  // Protect the caller's variables; %ROWCOUNT is set by the embedded SQL below.
  New SQLCODE,%ROWCOUNT,%ROWID
  &sql(UPDATE students SET avgscore = 
    (SELECT AVG(sc.score) 
     FROM scores sc, students st
     WHERE sc.student_id = st.student_id 
       AND st.lastname = :lastname
       AND st.firstname = :firstname)
     WHERE students.lastname = :lastname
       AND students.firstname = :firstname)

  // When called as a stored procedure, report the outcome through the SQL context.
  IF ($GET(%sqlcontext) '= "") {
    SET %sqlcontext.%SQLCODE = SQLCODE
    SET %sqlcontext.%ROWCOUNT = %ROWCOUNT
  }
  QUIT
}

 

Instead of using SQL DDL to define procedures (even though you can), it's easier to create one in your own class: just declare the class method with [ SqlProc ] and it becomes available inside SQL. You can define a SQL function the same way.
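For example, here's a minimal sketch of a class method projected as a SQL function (the class and method names are just illustrative):

Class My.Utils [ Abstract ]
{

/// Projected to SQL with the default name My.Utils_FullName.
ClassMethod FullName(first As %String, last As %String) As %String [ SqlProc ]
{
  QUIT first_" "_last
}

}

You could then call it from SQL like: SELECT My.Utils_FullName(firstname, lastname) FROM students.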

Is the file using a BOM? If so, you can check the header for the following signature: EF BB BF

In ObjectScript that sequence can be written as: $c(239, 187, 191)
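A minimal sketch of that check (the helper name is mine; it reads the first three bytes of the file through %Stream.FileBinary):

ClassMethod HasUTF8BOM(path As %String) As %Boolean
{
  set stream = ##class(%Stream.FileBinary).%New()
  // LinkToFile returns a %Status; bail out if the file can't be opened.
  if $$$ISERR(stream.LinkToFile(path)) quit 0
  // The UTF-8 BOM is the byte sequence EF BB BF, i.e. $c(239, 187, 191).
  quit (stream.Read(3) = $c(239, 187, 191))
}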

Now keep in mind that most editors have abandoned the BOM in favor of byte-sequence detection heuristics, and only as a fallback: many assume you're already working with UTF-8, won't handle some other charsets well, and won't output BOM bytes unless you explicitly tell them to use the desired charset.
 

You can also try checking the content against the US-ASCII table, which covers code points 0 to 127; however, that still wouldn't tell you conclusively whether the stream contains UTF-8 characters.

I develop using a mix of Caché Studio and Visual Studio Code: Visual Studio Code for front-end code, Caché Studio for back-end code.

I don't use Caché Studio to edit static files.
 

I'm actually doing experiments using my Port library for managing export/import operations.

 

As for how I keep the server code close to its client counterpart, it's quite simple. By default, Port exports a project following the template /CacheProjects/{NAMESPACE}/{PROJECT}; instead of depending on that, I override the path to /my-repo/server.

From this point exported files will follow:

/my-repo/server/cls/My/Class.cls

/my-repo/server/cls/Another/Deep/Package/Whatever.cls

/my-repo/server/int/myroutine.int
/my-repo/server/mac/myroutine.mac

/my-repo/server/dfi/mydef.dfi

/my-repo/server/inc/myinclude.inc

And so on, for every recognized Caché file format.

Now notice that I didn't say anything about static files. That's where a module bundler like Webpack comes in to orchestrate the client-side workflow.

With that, Caché only needs to send readable data back to the SPA (preferably JSON, using %CSP.REST).
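Here's a minimal sketch of such an endpoint (the route, class name, and payload are illustrative, and the JSON object literal syntax assumes a version with native JSON support):

Class My.API Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
  <Route Url="/students/:id/avgscore" Method="GET" Call="GetAvgScore"/>
</Routes>
}

ClassMethod GetAvgScore(id As %String) As %Status
{
  set %response.ContentType = "application/json"
  // Illustrative payload; a real handler would query the students table.
  set payload = {"id": (id), "avgscore": 87.5}
  do payload.%ToJSON()
  quit $$$OK
}

}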

When the project repo reaches a milestone, I build a release to actually export the bundled files to that path, like this:
 

/my-repo/server/web/app/bundle-[chunkhash].js

/my-repo/server/web/app/bundle-[chunkhash].css


Since [chunkhash] is unique per build, the consumer shouldn't have any issues with stale browser caches.

Now there's an issue: the bundled files still aren't inside the CSP folder, so I need to import the project back into Studio using Port.
Files are imported using UDL instead of XML, but Port always keeps a project XML up to date alongside the UDL code.

As you can see, I can work with Caché code AND client code, alternating between both editors and keeping each in its own development scope, even though the code lives in the same repo.

http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=...
Check the "Configuring an ECP Application Server" section. However, I advise you to read everything.

Remote databases should be created on the client instance (A). They represent the connection to the local database on the server instance (B).

If you follow the steps from the guide, you'll notice that when creating a remote database the wizard asks you for a server; that's where you should select the ECP server you just defined.

Now create a local database on the client instance if you need to store something on it (like, let's say... code). Finally, create a namespace; this is where you define whether globals and/or routines use the local or the remote database.
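If you prefer to script it, here's a hedged sketch using the Config API (run in %SYS on the client instance A; every name, address, and directory below is a placeholder, and each Create returns a %Status you should check):

// 1) Define the ECP connection to the server instance (B).
set props("Address") = "server-b.example.com"
set props("Port") = 1972
do ##class(Config.ECPServers).Create("SERVERB", .props)

// 2) Create the remote database, pointing at a database directory on B.
kill props
set props("Server") = "SERVERB"
set props("Directory") = "/cachesys/mgr/appdata/"
do ##class(Config.Databases).Create("REMOTEDATA", .props)

// 3) Create the namespace, choosing which database serves globals
//    and which serves routines.
kill props
set props("Globals") = "REMOTEDATA"   // remote data
set props("Routines") = "LOCALCODE"   // local code database
do ##class(Config.Namespaces).Create("APP", .props)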

CAUTION: If you configure globals to use the remote database, any changes you make on the client instance will reflect on its provider counterpart. The same applies to routines.

EDIT: Answering your question: no, as far as I know, with this method you can't have more than one remote database serving globals or routines in a single namespace. But I think you can work around it by defining another namespace, backed by another remote database that links to a local database on an instance C, and mapping what you want.

But you'll be responsible for dealing with name conflicts.

cls/My/Deep/Class.cls

I don't think subdirectories should be applied to routines: routines were never meant to have submodules or subpackages, and dots can be part of their names. Besides, if you need that kind of complexity, you'd be better off organizing things with classes rather than routines.
 

I DO NOT recommend using src unless you want to mix back-end and front-end code, or you want to keep the server code in a separate repository.
Here is a scaffolding based on React for better understanding.

my-app-project/
  package.json
  server/
    cls/
    mac/
    int/
    csp/   <- this is our build path; nothing should be added manually because everything is handled by the bundler
  scripts/
    test.js
    build.js
    dev.js
  config/
    webpack.config.dev.js
    webpack.config.prod.js
    webpack.config.test.js
  src/
    components/
      Header/
        index.js
        Header.js
        HeaderSearchBar.js
      Footer/
        index.js
        Footer.js
        FooterCopyright.js
      AppContainer/
        index.js
        AppContainer.js
    containers/
      App.js
    tests/
      components/
        Header/
          Header.js
          HeaderSearchBar.js
        Footer/
          Footer.js
          FooterCopyright.js
        AppContainer/
          AppContainer.js
You can use folders to separate client and server code inside the same project. You can even structure your project using a monorepo approach if you want to keep multiple application modules together.
                  

Now, since React will be using webpack's hot module reloading along with webpack-dev-middleware, which builds everything in memory, your Caché server only needs to follow SPA conventions and provide consumable data.

There's a catch, though: whenever the developer builds a new version (using webpack.config.prod), it's mandatory to delete the older bundle and import the project back into Caché, to keep the source on the server in sync with the project.
     

Use #server if you want to wait for a response, but be warned that JavaScript is single-threaded, and using #server with a left-hand side (LHS) variable will block the current thread.

If you don't specify an LHS, you can use #call instead, which tells the CSP Gateway to execute the request asynchronously.

More details here: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=...


If you need something closer to a callback, you must emit it from the server using &js< /* your JavaScript code here */ >. This way the server returns "runtime" JavaScript that executes the remaining operations on the client side.
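A minimal sketch of that pattern inside a CSP page class (the method and the client-side function are illustrative):

/// Called from the browser via #server(..SaveScore(score))# or #call(..SaveScore(score))#.
ClassMethod SaveScore(score As %Integer)
{
  // ... server-side work here ...
  // Emit "runtime" JavaScript that runs on the client when the response
  // arrives; onScoreSaved is a hypothetical function defined on the page.
  &js<onScoreSaved(#(score)#);>
}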
 

Check my project; it might give you some help planning how to mimic package-to-folder exporting.

https://github.com/rfns/port

Now, this is just an idea: I think you should make the JSON serialization available as an API. I mean, not only embed the serializer by extending the target class, but also provide a way to serialize without extending it.

There might be cases where the user needs to use the same class for other JSON structures, and with the current implementation they would be locked in, since parameter values can't be changed on the fly.

 

However, that implies implementing a selector system to achieve the same effect that each parameter provides. So I don't think it's something for right now, but you might want to consider it.
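To illustrate the idea, a purely hypothetical shape for such an API (every name below is invented):

// Instead of requiring the target class to extend the adaptor,
// the serializer could be called externally, with per-call settings
// replacing the compile-time parameters:
set settings("IGNORENULL") = 1
set json = ##class(Some.JSON.Serializer).Serialize(myObject, .settings)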

Greetings Skip,

Just so you know: unless you're on a version that doesn't yet support native JSON, I can say that %DynamicAbstractObject's %FromJSON and %ToJSON are by far the fastest parsing/serializing implementations, about 4x to 8x faster than my lib.
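For reference, a minimal usage sketch (requires a version with native JSON support):

// Parse a JSON string into a dynamic entity...
set obj = ##class(%DynamicAbstractObject).%FromJSON("{""name"":""Skip"",""score"":42}")
write obj.name,!       // -> Skip

// ...and serialize it back to a JSON string.
write obj.%ToJSON(),!  // -> {"name":"Skip","score":42}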


Otherwise, I see no problem with open-sourcing this library. But before I do that, I need to remove business-related implementations.

Try generating an error with code 5001: $$$ERROR($$$GeneralError, "your custom message").
This will give you "ERROR #5001: your custom message"; fetch the text with GetErrorText and $REPLACE the "ERROR #5001: " part with "".
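A minimal sketch of the trick ($$$ERROR and $$$GeneralError are the standard %occStatus macros, available inside class code):

set sc = $$$ERROR($$$GeneralError, "your custom message")
set text = $SYSTEM.Status.GetErrorText(sc)
// Strip the generic prefix so only the custom message remains.
set message = $REPLACE(text, "ERROR #5001: ", "")
write message,!   // -> your custom message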

Note that if you have multiple errors inside a single status you'll need to fetch and $replace them individually.

That's because I don't think you can generate custom error codes. I would delegate the errors to your application layer instead.

Nice approach.

I got almost everything I need except the property name. When the column is not aliased, I can take its name normally from label and colName.

However, it breaks when the SQL column is aliased: both colName and label then refer to the alias.

By the way, you can get it using:

resultset.%GetMetadata().columns.GetAt(index).colName // and label.

Notice that each item in columns is an instance of %ResultSet.MD.Column.
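For instance, a minimal sketch dumping both properties for every column (this uses %SQL.Statement, whose column objects are %SQL.StatementColumn instances exposing the same colName and label properties; the query reuses the students table from the first example, and status checks are omitted for brevity):

set stmt = ##class(%SQL.Statement).%New()
do stmt.%Prepare("SELECT firstname, avgscore AS score FROM students")
set resultset = stmt.%Execute()
set columns = resultset.%GetMetadata().columns
for index = 1:1:columns.Count() {
  set column = columns.GetAt(index)
  write column.colName, " / ", column.label, !
}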