go to post Julian Matthews · Jan 13, 2023 Hi Thomas. Are you trying to connect to the same API endpoint from Test and Live? Is it possible that the endpoint is performing IP filtering, so your request from the Live server is being rejected?
go to post Julian Matthews · Nov 16, 2022 Hey Andy. When you copy the router from your production, it will continue to reference the same rule class in its settings. After you have copied the router, hit the magnifying glass next to the rule setting and use the "Save As" option in the Rule Editor. After you have done this, go back to your production and change the rule assigned to your router via the drop-down to select the new rule. Just make sure you create a new alias for your rule on the "General" tab of the rule page.
go to post Julian Matthews · Nov 9, 2022 Hey William. I'm pretty sure you just need to query the table Ens.MessageHeader. This should give you the process by way of the column SourceConfigName, and the status of the discarded messages. For example:

SELECT * FROM Ens.MessageHeader WHERE SourceConfigName = 'ProcessNameHere' AND Status = 'Discarded'

You may want to consider including a time range depending on the size of the underlying database.
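A sketch of that query with a time range added. The 7-day window is an arbitrary example, and this assumes the standard TimeCreated timestamp column on Ens.MessageHeader:

```sql
-- Restrict to discarded messages from the last 7 days
SELECT ID, SourceConfigName, TimeCreated, Status
FROM Ens.MessageHeader
WHERE SourceConfigName = 'ProcessNameHere'
  AND Status = 'Discarded'
  AND TimeCreated >= DATEADD('dd', -7, GETDATE())
```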
go to post Julian Matthews · Oct 28, 2022 I ended up extending EnsLib.HL7.Operation.TCPOperation and overriding the OnGetReplyAction method. From there, I copied the default method's content but prepended it with a check that does the following:

- Checks that pResponse is an object
- Loops through the HL7 message in search of an ERR segment
- Checks the value of ERR:3.9 against a Lookup Table

If any of the above fail, the response message is passed to the original ReplyCodeAction logic; otherwise, it quits with the result from the Lookup Table. Using a Lookup Table makes adding/amending error actions accessible to the wider team rather than burying them in the ObjectScript, and the failsafe of reverting to the original ReplyCodeAction logic keeps the operation from breaking on an unexpected error, as it retains the robustness of the original method.
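A minimal sketch of that override. The subclass name, the lookup table name ("HL7ErrorActions"), and the exact fallback call are assumptions for illustration, not the original code:

```objectscript
Class Demo.HL7.Operation.TCPOperation Extends EnsLib.HL7.Operation.TCPOperation
{

Method OnGetReplyAction(pMsgOut As EnsLib.HL7.Message, ByRef pResponse As EnsLib.HL7.Message, ByRef pSC As %Status) As %String
{
    If $IsObject(pResponse) {
        // Hunt for an ERR segment and check ERR:3.9 against a lookup table
        For i=1:1:pResponse.SegCount {
            Set tSeg = pResponse.GetSegmentAt(i)
            If tSeg.Name = "ERR" {
                Set tAction = ##class(Ens.Util.FunctionSet).Lookup("HL7ErrorActions", tSeg.GetValueAt("3.9"))
                If tAction '= "" Return tAction
            }
        }
    }
    // Anything unexpected falls back to the standard ReplyCodeAction handling
    Return ##super(pMsgOut, .pResponse, .pSC)
}

}
```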
go to post Julian Matthews · Oct 25, 2022 Hey Patty. If you simply need the empty NTE to be added in using the DTL, you can set its first field to an empty string to force the segment to appear in the output. Note that my example simply hardcodes the first field of the first repetition with no care for the content. You will likely need a "for each" where you evaluate whether the source NTE:1 has a value, and only set it to an empty string if there is no content in the source.
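In ObjectScript terms, the conditional logic might look roughly like the sketch below. The virtual document path ("NTEgrp(i).NTE:1") is a placeholder and depends entirely on your HL7 schema:

```objectscript
// Sketch only: loop over NTE repetitions and force an empty NTE:1
// where the source has no value, so the segment still appears.
For i=1:1:source.GetValueAt("NTEgrp(*)") {
    If source.GetValueAt("NTEgrp("_i_").NTE:1") = "" {
        Do target.SetValueAt("", "NTEgrp("_i_").NTE:1")
    }
}
```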
go to post Julian Matthews · Oct 21, 2022 Hey Kev. The main way to build upon this would be to use something like Prometheus and Grafana to pull data out and then display it in a human readable way, and it has been covered on the forums a few times. However, if you were to upgrade past IRIS 2020, you should find that you are able to utilise System Alerting and Monitoring (SAM) in your environment.
go to post Julian Matthews · Oct 19, 2022 Mainly major version jumps, unless something is problematic in the version that has ended up in our production environment. The last jump was 2019.1 to the current 2022.1, and I'm blaming the pandemic for the lack of upgrades between those two releases.
go to post Julian Matthews · Oct 19, 2022 Seeing as I just completed a production upgrade yesterday:

What InterSystems products + versions are you running? ($zv is ideal.)
IRIS for Windows (x86-64) 2022.1 (Build 209U) Tue May 31 2022 12:16:40 EDT [Health:3.5.0]

What makes you decide to upgrade?
New features + security fixes.

What are your blockers to upgrading?
Bugs in new releases, and being limited to the non-CD releases due to our current configuration.

What is your process for evaluating and planning a possible upgrade?
Install the new version in our NPE and use it, running tests against the most heavily used elements.

What documentation resources do you use?
Release notes, plus any upgrade guides that explicitly call out versions you can/can't upgrade from.

What gaps/issues do you see in our existing documentation around upgrades?
It's a small thing, but a link to the release notes from the online distribution page on WRC, alongside the release in question, would be greatly appreciated.

What would make your InterSystems software upgrade process better?
One step that always bothers me is the need to recompile post-upgrade, as it has never been made quite clear to me at what stage this needs to be done when working in a mirrored environment. This could be a step handled by the installer, given that it should happen with any major version change.

What has made an upgrade fail?
Not to hammer on at the same point, but I did have an upgrade "fail" due to a miscommunication about whether the version change was major or minor, and we hadn't run the recompile of all namespaces.

When have you had to roll back?
Never had to fully roll back, but I have had to fall back to a secondary mirror after noting the upgraded mirror was having issues (see above). Otherwise we aim for a "fix forward" approach to issues found following an upgrade.
go to post Julian Matthews · Sep 28, 2022 So upon further review, it seems that the first ACK is being generated by the Operation, and the second one is the body of the HTTP response. Basically, the operation will attempt to parse the HTTP response into an HL7 message, and if that doesn't happen, it will "generate" an ACK and write the HTTP response data at the end of the generated ACK. In my case, although there is an HL7 message in the response, it's not being parsed for some reason, so the code moves on to generating its own ACK, followed by the HTTP response body, which is the second ACK I was seeing. I'm now replicating the HTTP operation and attempting to pin down exactly where it's falling down; failing that, I will likely reach out to WRC, as it seems to be an issue deeper than I can dive.
go to post Julian Matthews · Sep 27, 2022 Hey Lionel. I did write an article about this a little while ago which I hope can walk you through what you're looking to achieve, with the difference being to pivot from using the ORCGrp counts like I had, and instead use RXCgrp and set the transform to have a source and target class of RDE O11. If you do try to follow this approach, I'm more than happy to answer any questions you have.
go to post Julian Matthews · Sep 21, 2022 Hey Marc. Thank you for sharing this - I have no idea how I have yet to come across this!
go to post Julian Matthews · Sep 21, 2022 If you have a record map configured and have the generated class, then the next step would be to use this generated class in a transform as your source. From there, you can transform the data into the SDA class you have set as the target of your transform. Once you have this SDA message, you can send it wherever your requirements need it to go.
go to post Julian Matthews · Sep 20, 2022 If you're working with a CSV, then you could look at the Record Mapper or Complex Record Mapper depending on the input format. From there, you could then create a transform that uses the generated class from the record mapper as your source, and then the appropriate SDA class as your destination. However, if you're working with an actual xls/xlsx file, then I'm out of ideas.
go to post Julian Matthews · Sep 20, 2022 Hey Marc. Firstly, thank you for sharing this. This does seem to closely follow what I had intended to use, with a slight variation or two. Would you mind giving some insight on this line "set tReadLen=..#CHUNKSIZE" as I'm not familiar with the use of the # symbol in this way. Is this acting as a modulo in this context?
go to post Julian Matthews · Aug 11, 2022 Your Process is most likely using ..SendRequestAsync() to send to the Operation and has "pResponseRequired" set to 1 (or not set at all, so it's using the default value of 1). There's nothing inherently wrong with this, but if you just want to send to the Operation and not worry about the response going back to your process, you could change the "pResponseRequired" flag to 0 in your call. So it would look a little like this: Set tSC = ..SendRequestAsync("TargetOperationName",ObjToSend,0) However, you may wish to consider whether this approach is appropriate for your setup, or whether you would be better off using SendRequestSync() and dealing with the response synchronously.
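For the synchronous alternative, a minimal sketch; the target name, response variable, and 30-second timeout are placeholders:

```objectscript
// Sketch: send synchronously and wait up to 30 seconds for the response
Set tSC = ..SendRequestSync("TargetOperationName", ObjToSend, .tResponse, 30)
If $$$ISERR(tSC) {
    // Handle the failure as appropriate, e.g. log the status
    $$$LOGSTATUS(tSC)
}
```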
go to post Julian Matthews · Aug 10, 2022 To parse the JSON, the below is a starting point for taking the content of the stream into a dynamic object and then saving each value into its own variable:

Set DynamicObject = [].%FromJSON(pRequest.Stream)
Set Name = DynamicObject.name
Set DOB = DynamicObject.DOB
Set SSN = DynamicObject.SSN

You could then store these wherever you need to. If your SQL table is external, your Operation could use the SQL Outbound Adapter to write them to your external DB.

ETA: If you then need to pick out the values within the content of name (which I assume has come from an HL7 message), you could use $PIECE to pick out the data from the delimited string you're receiving.
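As an illustration of the $PIECE suggestion; the sample value and the "^" delimiter are assumptions:

```objectscript
// $PIECE extracts delimited pieces from a string by position
Set Name = "Smith^John^A"              // e.g. an HL7-style delimited value
Set FamilyName = $PIECE(Name, "^", 1)  // "Smith"
Set GivenName  = $PIECE(Name, "^", 2)  // "John"
```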
go to post Julian Matthews · Jul 29, 2022 Hey Daniel. As a starting point, I would not add the trace when viewing the DTL in Studio; instead, I would add it when using the Data Transformation Builder. If this is not working for you, make sure that the router has "Log Trace Events" enabled in its settings and that it has been restarted since enabling the trace. I have been caught out numerous times by enabling the trace and then forgetting to restart the process/router in the production.
go to post Julian Matthews · Jul 21, 2022 The issue of no results being returned is likely elsewhere in your query. To test this, I created a basic table with the following:

CREATE TABLE Demo.BBQ (
    Name varchar(100),
    Type varchar(50),
    isActive bit
)

And I then added a few rows:

Insert into Demo.BBQ (Name, Type, isActive) VALUES ('Super Grill''s BBQ Hut', 'Outdoor', 1)
Insert into Demo.BBQ (Name, Type, isActive) VALUES ('Bobs BBQ Bistro', 'Indoor', 1)
Insert into Demo.BBQ (Name, Type, isActive) VALUES ('Rubbish Grill''s BBQ Resort', 'Not Known', 0)

Note that the doubled single quotes used in the inserts are stored as single quotes in the table. I can then run a query using the LIKE function, adding a condition to exclude the inactive location if needed. Doubling up a single quote to escape the character is not an InterSystems-specific approach, but a generally standard SQL way of escaping the single quote.
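A sketch of the two queries described above, against the Demo.BBQ table from the inserts; the '%Grill''s%' pattern is just an example:

```sql
-- LIKE query: the doubled single quote escapes the apostrophe in the literal
SELECT Name, Type, isActive
FROM Demo.BBQ
WHERE Name LIKE '%Grill''s%'

-- Same query, excluding the inactive location
SELECT Name, Type, isActive
FROM Demo.BBQ
WHERE Name LIKE '%Grill''s%' AND isActive = 1
```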