go to post
Perhaps the private webserver is running on 52773 inside the container and exposed as 52774 on the host? So from within the container you should connect to localhost:52773 instead of 52774.
go to post
Wrote a sample UDAF for this today: https://community.intersystems.com/post/writing-user-defined-aggregate-f...
go to post
Wrote a sample UDAF for this today: https://community.intersystems.com/post/writing-user-defined-aggregate-f...
go to post
Actually I just went ahead and filed the issue so this doesn't get lost:
https://github.com/intersystems/isc-ipm-js/issues/18
go to post
The purpose of IRIS embedded source control features is to keep code changes made in the database synchronized with the server filesystem, to automate any source control provider-specific operations in a way that ensures that synchronization, and to provide concurrency controls for developers working in a shared environment (when relevant).
In the days of Studio, all code changes were made in the database first, rather than on any filesystem, so you needed an embedded source control solution to get real source control at all. With client-side editing in VSCode, there are *still* some changes to code that are made "in the database first" - specifically, anything done through the Management Portal's graphical editors for interoperability and business intelligence. For such use cases, embedded source control is relevant even when you're developing against a local Docker container (which I'd consider modern best practice and prefer over a remote/shared environment where feasible) - otherwise, you need to jump through extra hoops to get your changes onto the client/server filesystem.
In a client-centric mode, it's totally fine to use git-source-control alongside the git command line, built-in VSCode tools, or your preferred Git GUI (GitHub Desktop, GitKraken, etc.). However, this misses an important benefit of git-source-control: when you pull, check out a branch, etc. through the extension, we can automatically reflect the operation in IRIS by loading added/modified items into the database and deleting items that were removed. If you make changes on the filesystem through one of these other channels, it's up to you to make sure they're reflected properly in IRIS.
Another benefit of git-source-control for local development is that when you're working across multiple IPM packages loaded from separate local repos, changes made via isfs folders will automatically be reflected in the correct repository. This is especially natural for established ObjectScript developers (e.g. "I just want to edit this class, then this other class in a different package") compared to a client-centric multi-root VSCode workspace, which could achieve the same thing with a bit more overhead.
go to post
isc.ipm.js supports Angular and React currently; we'd welcome a PR or issue for Next.js. Probably not too much work is needed to support it - it's just a matter of determining the build process and static file layout.
go to post
Ahh, I see. Honestly, if you're looking to get the two to work together, you might be better off writing an adapter layer between %UnitTest.TestCase and your existing test case parent class rather than trying to make TestCoverage work with your custom unit test framework.
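If it helps, here's a rough sketch of what I mean (the class and method names here are made up - your framework's entry points will look different):
/// Hypothetical adapter: a %UnitTest.TestCase subclass that just invokes the
/// custom framework's suite, so TestCoverage can instrument the run as usual.
Class MyApp.Test.LegacyAdapter Extends %UnitTest.TestCase
{
/// Name of the custom-framework test class to run (made-up example)
Parameter TARGETCLASS = "MyApp.LegacyTests";

Method TestRunLegacySuite()
{
    // Assumption: the custom framework exposes a RunAll() classmethod that
    // returns a %Status; adapt this to whatever your framework provides.
    Set sc = $ClassMethod(..#TARGETCLASS, "RunAll")
    Do $$$AssertStatusOK(sc, "Custom-framework suite passed when run under %UnitTest")
}
}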
go to post
Hi @Jani Hurskainen - the short answer is "yes, TestCoverage is coupled to %UnitTest."
Can you elaborate on what you have in mind by "other unit testing frameworks" and/or what you're trying to achieve? %UnitTest is the only unit testing framework for ObjectScript that I'm aware of. Is your objective to unify Python and ObjectScript unit tests?
go to post
I just asked "How do I change my password in InterSystems Server Manager in VSCode?"
It gave me three answers: the first one was outdated, the second one was a bad idea, and the third was the right one.
go to post
This is old enough to post a spoiler now - I got down to 35 with the *i trick; what's the 34-character solution?
go to post
(but the solution is dependent on compilation flags - not sure if that invalidates it)
go to post
38 characters for me. :)
go to post
I got a question from an intern having trouble writing out to a specific file using %Stream.FileCharacter, and thought I'd see how DC AI would do: https://community.intersystems.com/ask-dc-ai?question_id=150746
Not bad! Not quite the way I recommended, so it's just a reminder to give the DC better input. ;)
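For reference, the usual pattern looks something like this (the path is just a placeholder, and this isn't necessarily word-for-word what I told the intern):
Set stream = ##class(%Stream.FileCharacter).%New()
// Point the stream at the specific file you want to write
Set sc = stream.LinkToFile("/tmp/example-output.txt")
If $System.Status.IsError(sc) { Do $System.Status.DisplayError(sc) Quit }
Do stream.WriteLine("Hello from %Stream.FileCharacter")
// %Save() flushes the stream contents out to the linked file
Set sc = stream.%Save()
If $System.Status.IsError(sc) { Do $System.Status.DisplayError(sc) }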
go to post
Thank you for the writeup @Shuheng Liu!
go to post
Not much to add other than the same pointer.
go to post
I strongly relate to this. Zen was a huge part of what sold me on InterSystems tech 15 years ago when I started here as an intern - for all the reasons you've described - and if I want to throw together a really quick POC that just has results of a class query shown in a table, with maybe some basic interactions with the data, I might still use it.
That said, for my team's work and even for my own personal projects, I've found the combination of isc.rest and isc.ipm.js to be *almost* as quick as Zen. With something like Angular with an IRIS back-end (consisting of a bunch of %Persistent classes), you need to write:
1. REST APIs for all your basic CRUD operations, queries, and business logic
2. Client code to call all those REST APIs
3. Client code for all the models used in those REST APIs
4. The actual UI
Suppose you want to make a simple change to one of your models - say, adding a property to a class and making it available in the UI. With Angular, this probably means changes at all four levels; with Zen, you get to skip 1-3 entirely. That's compelling. The inevitable side effect of that convenience, though, is that your application's API surface (and therefore attack surface) is enormous and near-impossible to fully enumerate. It's possible to secure a Zen UI, but it's much easier to shoot yourself in the foot.
isc.rest makes (1) super easy - add a parent class to your %Persistent class and do a few easy parameter/method overrides to get CRUD and queries basically for free, and write a bit of XML if you want to do fancier things to expose business logic or class queries. This provides enough metadata to generate an OpenAPI spec, which can then be used to automate (2) and (3) with the help of openapi-generator. So while you can't skip 1-3 entirely, this toolset makes it all significantly faster.
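To make that concrete, the shape of it is roughly as follows - I'm writing this from memory, so treat the superclass, parameter, and method names as approximations and check the isc.rest README for the exact API:
/// Rough sketch only - names approximated from memory of isc.rest;
/// the project's documentation has the authoritative API.
Class MyApp.Model.Widget Extends (%Persistent, %pkg.isc.rest.model.adaptor)
{
/// Resource name used in REST URLs (e.g. GET /widget/:id)
Parameter RESOURCENAME = "widget";

Property Name As %String;

/// Override to decide whether the current user may perform the requested
/// operation on the resource; always allowing it here for illustration only.
ClassMethod CheckPermission(pID As %String, pOperation As %String, pUserContext As %RegisteredObject) As %Boolean
{
    Quit 1
}
}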
go to post
Hi @Kwabena Ayim-Aboagye - zpm "generate" will create module.xml in a folder on the IRIS server, and if you specify the folder that holds your code, it should discover the things already in that folder and add them to module.xml.
go to post
Sorry we missed that. I started to look around for best practices and forgot to circle back.
It's a fantastic question, and I think your gut feeling from https://github.com/intersystems/git-source-control/discussions/343 is correct - the local-to-the-server repo should be in a place accessible from all mirror members, provided you can do this in a way that doesn't introduce a single point of failure operationally.
If that location is unavailable, you won't be able to do development, but operations on the running instance shouldn't be impacted otherwise (and that location being unavailable would be something that needs to be fixed immediately anyway).
go to post
You might want to submit that here: https://community.intersystems.com/post/3rd-intersystems-ideas-contest
go to post
I like this in combination with a global mapped to %ALL that has:
^SYS("ConfigDatabaseOverride","BAR")="^^:ds:BARCONFIG"
So before referencing the possibly-mapped global, you'd check whether there's an override for the current namespace and, if there is, use it.
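In code, the lookup ends up looking something like this (the node name is made up, and depending on the format of the stored override value you may need to translate it before using it in an extended reference):
// Check for a namespace-specific override before touching the
// possibly-mapped global; fall back to the plain global if there isn't one.
Set loc = $Get(^SYS("ConfigDatabaseOverride","BAR"))
If loc '= "" {
    // Extended reference into the override location
    Set value = $Get(^[loc]BAR("SomeSetting"))
} Else {
    Set value = $Get(^BAR("SomeSetting"))
}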