Is $order cached? How to delete the cache of a SQL query/$order called from an API
Hello InterSystems Community,
I built an API, and in it I use $order to get the data from my globals.
The first time I use the API, it fails with an error (504 Gateway Timeout, > 60 sec.).
After two reloads it works with a loading time of 2 seconds! That sounds to me like $order is cached and therefore runs faster. Is it true that $order is cached?
Once it works, I only get the error again the next day, so it is very hard to reproduce. Is it possible to delete the cache for $order?
Example:
set a = "" for { set a = $order(^SD031S("0","7",x,a)) if (a = "") {quit} set b = "" for { set b = $order(^SD031S("0","7",x,a,b)) if (b = "") {quit} if ($piece($get(^SD031("0",a,b,1),""),Y,9) = x) { set c = a set d = "" for { set d = $order(^FA060S("0","22",c,b,c,d)) if (d = "") {quit} if ($piece($get(^FA060("0",c,d,2),""),Y,25) = b) { #dim innerObject = ##class(%ZEN.proxyObject).%New() set articleList = ##class(test.test).getArticle("SB02906",c,d,"0","1") set dataLength = 0 set dataLength = artcleList.Count() set innerObject.d = d set innerObject.c = c set innerObject.dataExist = $case(dataLength, 0:0, :1) do responseList.Insert(innerObject) } } } } } } set object.response = responseList set object.result = $$$YES set object.message = "" set x = ##class(%ZEN.Auxiliary.jsonArrayProvider).%WriteJSONStreamFromObject(.json,object) return $$$OK
Thanks and Best regards
Florian Hansmann
Almost right!
Not $order itself, but the related global blocks are cached in the global buffers.
Large global buffers improve performance, but they have to be loaded first,
and disk access is the slowest part of your machine (except the network).
So on first access you load the buffers, and that might be slow.
Any further access uses the already loaded buffers.
Hey Robert,
thanks for the fast answer. Do you know if it is possible to delete these global buffers?
Then I could reproduce the problem again and again. That would make it much easier to find out what makes the request so slow.
As these buffers are a (hopefully large) pool that belongs to your whole installation,
they are only cleared if you restart Caché.
But you should probably take a look at your "disk drives" (whatever type they are)
[in quotes as they might be virtual in some way].
Their performance might be worth checking.
In addition, a performance check with the WRC might make sense.
They are the top experts on the subject.
If you have a large buffer pool, you might run a $QUERY() loop across the whole global
in the morning and hope it stays in the buffers long enough. Some installations practice it that way.
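A minimal sketch of such a warm-up loop; the label and routine name are illustrative, not from the original post, and the global reference is passed in as a string:

    WarmGlobal(globalRef) ; e.g. do WarmGlobal^WARMUP("^SD031S")
        ; walk every node of the global; each $query reads the
        ; corresponding block from disk into the global buffers
        set node = globalRef
        for {
            set node = $query(@node)
            if (node = "") { quit }
        }
        quit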
Restarting is the easiest way.
If you can't restart, just overwrite the global buffers with another global (but check that your target global is really flushed out of the buffers).
Dismounting/mounting the database invalidates the cache for the globals of that database.
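A hedged sketch of the dismount/mount approach, assuming the SYS.Database class in the %SYS namespace with DismountDatabase()/MountDatabase() class methods (check the class reference of your Caché version for the exact signatures):

    ; run in the %SYS namespace; directory is the database path of the globals to flush
    set sc = ##class(SYS.Database).DismountDatabase(directory)
    if $system.Status.IsOK(sc) {
        set sc = ##class(SYS.Database).MountDatabase(directory)
    }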
Great! That's faster and requires no restart.
I'm not sure if I'm correct here, but I think you can eliminate two for-loops and the call to getArticle():
In SD031 there should be only one key for "x" (it should be a 1:1 relationship). Also, try to avoid setting up a list just to .Count() it to see whether "dataExist" should be set; you can use something like $data instead (in ^SB02906 there should be one dataset for every item you sell, for every customer). That should speed everything up a little.
My approach:
if (a '="") {
set b = $order(^SD031S("0","7",x,a,b))
if (b '= "") {
if ($piece($get(^SD031("0",a,b,1)),Y,9) = x) {
set c = a
set d = ""
for {
set d = $order(^FA060S("0","22",c,b,c,d))
if (d = "") { quit }
if ($piece($get(^FA060("0",c,d,2),""),Y,25) = b) {
#dim innerObject = ##class(%ZEN.proxyObject).%New()
set innerObject.d = d
set innerObject.c = c
set k = $o(^SB02906("0",c,d,""))
if (k'="") {
set innerObject.dataExist = ($d(^SB02906("0",c,d,k,"EUR","0","1"))'=0)
} else {
set innerObject.dataExist = 0
}
do responseList.Insert(innerObject)
}
}
}
}
}
Also, that caching problem should not exist in production, because FA060 and SD031 are read for almost everything in your ERP, so they stay in the cache. Otherwise you probably need to create a background job that reads a few global nodes every 5-10 minutes to keep them cached.
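A minimal sketch of such a background job, reusing the hypothetical WarmGlobal helper from earlier in this thread (label and routine names are illustrative):

    KeepWarm ; start once with: job KeepWarm^WARMUP
        for {
            do WarmGlobal("^FA060S")
            do WarmGlobal("^SD031S")
            hang 300  ; sleep 5 minutes between passes
        }
        quit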
You should not always rely on all data being in the cache and on processing finishing before a timeout occurs (data will grow, more users may use the system, ...).
For REST queries that risk running too long, you can do as follows (sketched below):
- the API responds immediately with a queryId (some sequential number),
- the query continues to run in the background, and its response is stored in some table/global/... with the queryId as the key,
- a separate API call is used by the client (with the queryId) to get the response (or a 'not ready' status while the query is still running).
It is a little more work for the client to call once to launch the query and a second time to get the results, but you are safe when your system grows with data or users.
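A minimal sketch of that pattern in ObjectScript; the ^QueryResult global, the ASYNCAPI routine name, and the labels are illustrative assumptions, not part of the original post:

    StartQuery() ; first API call: hand out a queryId and launch the work in the background
        set queryId = $increment(^QueryResult)
        set ^QueryResult(queryId, "status") = "running"
        job RunQuery^ASYNCAPI(queryId)
        quit queryId

    RunQuery(queryId) ; background job: run the slow $order loops and store the result
        set result = "" ; ... build the JSON response here ...
        set ^QueryResult(queryId, "data") = result
        set ^QueryResult(queryId, "status") = "done"
        quit

    GetResult(queryId) ; second API call: return the result, or 'not ready' while running
        if ($get(^QueryResult(queryId, "status")) = "done") {
            quit ^QueryResult(queryId, "data")
        }
        quit "not ready"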