Sean Connelly · May 31, 2017

Excellent, just tried it and the value is accessible via ReturnTypeParams in the %Dictionary.CompiledMethod table.
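For anyone else looking for it, a query along these lines surfaces the value (the class and method names here are only placeholders):

    SELECT Name, ReturnType, ReturnTypeParams
    FROM %Dictionary.CompiledMethod
    WHERE parent = 'MyApp.MyClass' AND Name = 'MyMethod'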
Sean Connelly · May 31, 2017

OK, excellent, thanks for that. I seem to remember hitting a brick wall trying to get this to work many moons ago.
Sean Connelly · May 31, 2017

Hi Alexander,

Unless I am missing a cool trick, you can't do this directly...

    Property DateOfBirth As %Date(JSONNAME = "BirthDate");

You would need to extend %Date with your own class type and add the JSONNAME parameter to it, which means you end up with...

    Property DateOfBirth As Cogs.Lib.Types.Date(JSONNAME = "BirthDate");

which for me feels much more cumbersome, not to mention that developers are forced to change all of their existing code as well as amend any existing overridden data types that they use. Unless I am missing another trick, I'm pretty sure you can't add these attributes to complex types, which, if I am right, is a show stopper anyway.

Annotations are just much easier to work with. I need them for methods as well, so it just seems more in keeping to do it all this way.

Sean.
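For completeness, a rough sketch of what that extended data type would look like (the parameter on its own does nothing until a serialiser inspects it):

    Class Cogs.Lib.Types.Date Extends %Date
    {

    Parameter JSONNAME;

    }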
Sean Connelly · May 31, 2017

Hi Rubens,

I designed the solution around the real-life use cases that I hit in my mainstream work.

In most instances I am handling JSON to and from a browser, and I have never had a use case where the JSON exceeds Caché's long string limit of 3,641,144 characters. That's with the exception of wanting to post a file along with JSON. In that instance I have some boilerplate code that sends them as multiparts and joins them back together after the main JSON parse.

With those decisions made, it was just a matter of writing very efficient COS code that processes long strings. A couple of years ago the serialiser and deserialiser classes stacked up pretty big. In this latest version they are an uber-efficient 90 and 100 lines of code each.

There is no AST magic going on, just projection compilation with inspection of the dictionary: a small lib to abstract the annotations and various code generator tricks to bake in type handlers and delegators.

Where data might go over 3,641,144 characters is passing data backwards and forwards with Node.js or another Caché server. In this instance the data is almost always going to be an array of results or an array of objects. For the latter there is a large array helper class I am working on that will split out individual objects from a stream and then handle them as long strings. This will be part of the Node package.

In the few fringe cases where someone might be generating objects larger than 3,641,144 characters, it wouldn't be too hard to have stream variants. I used to have these, but dropped them because they were never used. I would keep the string handler variants as the primary implementations, though, as they prove very quick.

As for older Caché instances, I have had to support JSON as far back as 8 years ago and still see the need for backwards compatibility.

Sean.
Sean Connelly · May 30, 2017

As Daniel has said.

Plus, you might want to unit test them, in which case you need to create an instance of your web service class and call its instance method, e.g.

    set service=##class(MyApp.MyService).%New()
    set result=service.Test()

where MyApp.MyService is the name of your web service class and Test() is the instance method you want to call.
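If you do go down the unit test route, a minimal sketch of a test class might look like this (assuming Test() returns a non-empty value on success; the class name and assertion are only illustrative):

    Class MyApp.Test.MyService Extends %UnitTest.TestCase
    {

    Method TestCallService()
    {
        // instantiate the web service class directly rather than calling it over HTTP
        set service=##class(MyApp.MyService).%New()
        set result=service.Test()
        // assert on whatever Test() is expected to return
        do $$$AssertNotEquals(result,"","Test() returned a value")
    }

    }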
Sean Connelly · May 30, 2017

Hi Ranjith,

This is a very good question, and the opposite of one asked a week ago...

https://community.intersystems.com/post/xml-json-ensemble

Caché has varying degrees of support for JSON, which will depend on the version of Caché that you have. Firstly, you will not find a one-step solution to your problem inside of Caché.

It's important to note that there is an impedance mismatch between JSON and XML that can produce different results in a one-step solution. If you really don't care about this, or about the exact format of the XML, then I can point you towards...

http://www.newtonsoft.com/json/help/html/ConvertingJSONandXML.htm

which is a .NET solution. You could create a simple .NET object that wraps and calls this conversion utility. You can then bind to that .NET object using these instructions...

http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=...

making the utility feel as if it were a local Caché object / function. There are Java alternatives which you can Google for, for which you would use the Java binding...

http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=...

A two-step conversion will require a little more coding, but will enable you to control your XML output exactly as you want it.

First you will need to convert the JSON into an internal object. If you are on 2016.1 or greater then please take a look at this article...

https://community.intersystems.com/post/introducing-new-json-capabilitie...

You can use %Object to ingest JSON into a generic object. From there you will need a concrete class that will be used to generate your XML. Make sure it extends %XML.Adaptor. It will then be a process of mapping each property from the generic object to the concrete object. Finally, call its XML-to-string / stream method and you will have well-formed and consistent XML.

If you are on an older version of Caché then take a look at the %ZEN.Auxiliary.jsonProvider class, which has a %ConvertJSONToObject() method. Apparently it's much slower than the newer %Object, which might factor into your solution. I've never used this method myself, but I would think you will end up with a very similar solution to the newer %Object.

Sean.
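To make the two-step route a little more concrete, here is a rough sketch using the older %ZEN.Auxiliary.jsonProvider path (the class name, property names and JSON shape are all placeholders, and the JSON keys are assumed to match the property names):

    Class MyApp.Person Extends (%RegisteredObject, %XML.Adaptor)
    {

    Property Name As %String;

    Property DateOfBirth As %String;

    ClassMethod JSONToXML(pJSON As %String, Output pXML As %String) As %Status
    {
        // step 1: map the JSON string onto the concrete class
        set sc=##class(%ZEN.Auxiliary.jsonProvider).%ConvertJSONToObject(pJSON,"MyApp.Person",.person)
        if $$$ISERR(sc) quit sc
        // step 2: export the populated object as XML
        quit person.XMLExportToString(.pXML)
    }

    }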
Sean Connelly · May 30, 2017

Hi Shobha,

Ensemble logs a great deal of metrics that you can use to determine all sorts of timings.

If you look at the header of any message you will see the time it was created and the time it was processed. This will give you the individual time taken for that message in its service or process to complete.

To get an end-to-end time you will need to know when the operation completed its task. In your instance you can enable "Archive IO", which you will find under "Development and Debugging" in your file out operation settings. This will record the time received and responded. If you take one of these times and subtract the created time of the service message then you will have an overall benchmark.

Note that in dev you will probably see very little or no lag between each message stage. However, when you get to a live environment this lag can increase from milliseconds to seconds depending on load. Therefore it's best to take any benchmark on dev as just a best-case scenario.

Sean.
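If it helps, those created / processed times can also be pulled straight out of the message header table with a query along these lines (the session id is just a placeholder):

    SELECT ID, SourceConfigName, TargetConfigName, TimeCreated, TimeProcessed,
           DATEDIFF('ms', TimeCreated, TimeProcessed) AS ElapsedMs
    FROM Ens.MessageHeader
    WHERE SessionId = 12345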
Sean Connelly · May 26, 2017

Just for good measure, I benchmarked Vitaliy's last example and it completes the same test in 0.344022 seconds, so for out-and-out performance a solution built around this approach is going to be the quickest.
Sean Connelly · May 26, 2017

OK, I got the third example working; it needed to stash the dirs as they were getting lost. Here are the timings...

Recursive ResultSet = 2.678719
Recycled ResultSet = 2.6759
Recursive SQL.Statement = 15.090297
Recycled SQL.Statement = 15.073955

I've tried it with shallow and deep folders with different file counts and the differential is about the same across all of them.

The recycled objects surprisingly only shave off a small amount of time. I think this is because of bottlenecks elsewhere that overshadow the milliseconds saved. SQL.Statement being 6-7x slower than ResultSet is a surprise, but then the underlying implementation is not doing a database query, which is where you would expect it to be the other way around.

The interesting thing now would be to benchmark one of the command line examples that have been given to compare.
Sean Connelly · May 26, 2017

> I don't recommend opening %ResultSet instances recursively.

Agreed, but maybe splitting hairs if only used once per process.

> It's more performatic if you open a single %SQL.Statement and reuse that.

Actually, it's MUCH slower, not sure why. I just gave it a quick test, see for yourself...

    ClassMethod GetFileTree(pFolder As %String, pWildcards As %String = "*", Output oFiles, ByRef pState = "") As %Status
    {
        // reuse a single %SQL.Statement, creating it on the first call
        if pState="" set pState=##class(%SQL.Statement).%New()
        set sc=pState.%PrepareClassQuery("%File", "FileSet")
        set fileset=pState.%Execute(##class(%File).NormalizeDirectory(pFolder),pWildcards,,1)
        while $$$ISOK(sc),fileset.%Next(.sc) {
            if fileset.%Get("Type")="D" {
                // recurse into sub-directories, passing the statement along
                set sc=..GetFileTree(fileset.%Get("Name"),pWildcards,.oFiles,.pState)
            } else {
                set oFiles(fileset.%Get("Name"))=""
            }
        }
        quit sc
    }

** EDITED **

This example recycles the FileSet (see comments below regarding performance)...

    ClassMethod GetFileTree3(pFolder As %String, pWildcards As %String = "*", Output oFiles, ByRef fileset = "") As %Status
    {
        // reuse a single %ResultSet, creating it on the first call
        if fileset="" set fileset=##class(%ResultSet).%New("%Library.File:FileSet")
        set sc=fileset.Execute(##class(%File).NormalizeDirectory(pFolder),pWildcards,,1)
        while $$$ISOK(sc),fileset.Next(.sc) {
            if fileset.Get("Type")="D" {
                // stash directories so recursion happens after this result set is exhausted
                set dirs(fileset.Get("Name"))=""
            } else {
                set oFiles(fileset.Get("Name"))=""
            }
        }
        set dir=$order(dirs(""))
        while dir'="" {
            set sc=..GetFileTree3(dir,pWildcards,.oFiles,.fileset)
            set dir=$order(dirs(dir))
        }
        quit sc
    }
Sean Connelly · May 26, 2017

I've removed the recycled ResultSet example; it is not working correctly. It might not work at all as a recycled approach. I will look at it further and run more time tests if it works.

In the meantime, my original example without recycling the ResultSet takes around 2 seconds on a nest of folders with 10,000+ files, whereas the recycled SQL.Statement example takes around 14 seconds.
Sean Connelly · May 26, 2017

Hi Evgeny,

I have a JSON-RPC library that does something similar to RESTForms. My approach is to have a higher-level save method where I can create / modify the object and then call its actual save method.

The problem with modifying the object inside OnBeforeSave() is that the modifications will occur after the validation has been called, the danger being that duff data could get persisted. The OnAddToSaveSet() callback will trigger before the validation is called, which will bubble up any validation errors correctly.

Alternatively...

    Property CreationDate As %Date [ InitialExpression = {+$h} ];

Sean.
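As a rough sketch of the callback route (the property name is only illustrative; the signature is the standard %OnAddToSaveSet one from %Library.Persistent):

    Method %OnAddToSaveSet(depth As %Integer = 3, insert As %Integer = 0, callcount As %Integer = 0) As %Status [ Private, ServerOnly = 1 ]
    {
        // stamp the creation date before validation runs, but only on first save
        if insert,..CreationDate="" set ..CreationDate=+$h
        quit $$$OK
    }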
Sean Connelly · May 26, 2017

    ClassMethod GetFileTree(pFolder As %String, pWildcards As %String = "*", Output oFiles) As %Status
    {
        set fileset=##class(%ResultSet).%New("%Library.File:FileSet")
        set sc=fileset.Execute(##class(%File).NormalizeDirectory(pFolder),pWildcards,,1)
        while $$$ISOK(sc),fileset.Next(.sc) {
            if fileset.Get("Type")="D" {
                // recurse into sub-directories
                set sc=..GetFileTree(fileset.Get("Name"),pWildcards,.oFiles)
            } else {
                set oFiles(fileset.Get("Name"))=""
            }
        }
        quit sc
    }

Search all...

    set sc=##class(Some.Lib.File).GetFileTree("c:\Temp",,.files)

Search for a specific file type...

    set sc=##class(Some.Lib.File).GetFileTree("c:\Temp","*.html",.files)

Search for multiple file types...

    set sc=##class(Some.Lib.File).GetFileTree("c:\Temp","*.html;*.css",.files)
Sean Connelly · May 25, 2017

Hi Sam,

Is this what you are looking for...

    if ##class(Ens.Util.FunctionSet).Lookup("MyTable",key)["foo"

using the contains operator ([)?
Sean Connelly · May 25, 2017

Nice solution, I've not seen RunCommandViaCPIPE used before.

Looking at the documentation it says...

"Run a command using a CPIPE device. The first unused CPIPE device is allocated and returned in pDevice. Upon exit the device is open; it is up to the caller to close that device when done with it."

Does this example need to handle this? Also, do you not worry that it's an internal class?
Sean Connelly · May 23, 2017

Thanks for replying. Can we assume this will not be backported?
Sean Connelly · May 18, 2017

The Ensemble operation would definitely need a custom adapter written. It probably wouldn't be that hard, just a bit time consuming getting all of the message frames working over TCP and handling fringe issues such as stale buffer data.

The only nag is that implementing this via one operation would be an anti-pattern; unsolicited ROS messages should really be entering Ensemble via a service. However, there is a quick win that would solve this.

Create a small Node process that multiplexes ROS web socket messages between an inbound HTTP service and an outbound HTTP operation. ROS will think it's talking to a socket client, whilst Ensemble will think it's talking to a pub/sub over HTTP. Importantly, you would get a bi-directional flow of messages in and out of Ensemble in the right directions.

Node has the battle-tested libs to translate between the two. At most it's probably 100 lines of Node glue code to get this working. Leveraging Ensemble's standard HTTP adapters means this should all be stable out of the box.
Sean Connelly · May 18, 2017

Hi Shobha,

I've just had a couple of attempts using the same WSDL and package name that you are using, as well as trying a few variations in the settings. On each attempt the wizard reported a successful generation, and I was unable to recreate the error that you are seeing.

The problem might be related to your specific version of Ensemble, or there could be something odd going on with dictionary mappings that needs to be looked at. I would recommend contacting your local InterSystems support for more help on this matter.

Sean.
Sean Connelly · May 18, 2017

Hi Tom,

The clue is in the error message...

    <text>ERROR #5002: Cache error: <UNDEFINED>zTestOperation+1^Test.WebService.1 *sc</text>

You have an undefined variable, and its name is sc, denoted by the *.

If you look at your code...

    if $$$ISERR(sc) do ..ReturnMethodStatusFault(sc)

you can see that you are checking the value of sc, which has not yet been set.

Sean.
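The fix is simply to set sc from a status-returning call before you test it, something along these lines (the method name here is just a placeholder for whatever your operation actually calls):

    set sc=..DoSomethingThatReturnsAStatus()
    if $$$ISERR(sc) do ..ReturnMethodStatusFault(sc)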