That works only in a CSP context and for CSP pages. You could write a wrapper, I suppose, but I think it would be easier to just write your own query-builder code:

ClassMethod Link(server = "www.example.com")
{
    try {
        // detect whether we are already running in a CSP context
        set cspContext = $data(%request)
        if 'cspContext {
            // create placeholder objects so %CSP.Page:Link can run outside CSP
            set %request = {} // ##class(%CSP.Request).%New()
            set %response = ##class(%CSP.Response).%New()
            set %session = {} // ##class(%CSP.Session).%New(-1,0)
        }
        set query("param") = 1
        set page = "/abcd.csp"
        set url = ##class(%CSP.Page).Link(page, .query)
        set url = $replace(url, page, server)
        write url
        // remove the placeholder context if we created it
        kill:'cspContext %request,%response,%session
    } catch {
        kill:'$g(cspContext) %request,%response,%session
    }
}

With a query builder:

ClassMethod Link(server = "www.example.com")
{
    set query("param") = 1

    // build a $list of URL-encoded key=value pairs
    set data = ""
    set param = $order(query(""), 1, value)
    while (param '= "") {
        set data = data _ $lb($$$URLENCODE(param) _ "=" _ $$$URLENCODE(value))
        set param = $order(query(param), 1, value)
    }
    // join the pairs with "&" to form the query string
    write server _ "?" _ $lts(data, "&")
}
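
Called from a terminal, the query-builder version would produce this (Util.URL is a hypothetical container class name):

USER>do ##class(Util.URL).Link("www.example.com")
www.example.com?param=1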

You're doing two separate operations:

  1. Syncing the data
  2. Syncing the cube

They can both be system tasks, with one task dependent on the other, or even a single task altogether.
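
A minimal sketch of the single-task variant (the MyApp class names are hypothetical; %SYS.Task.Definition and %DeepSee.Utils are system classes):

Class MyApp.Task.SyncAll Extends %SYS.Task.Definition
{

Method OnTask() As %Status
{
    // 1. sync the data from the remote source (hypothetical helper)
    set sc = ##class(MyApp.DataSync).Run()
    quit:$$$ISERR(sc) sc
    // 2. sync the cube, updating only facts changed since the last sync
    quit ##class(%DeepSee.Utils).%SynchronizeCube("MyCube")
}

}

Schedule it once in Task Manager and both operations run in order, with the cube sync skipped automatically if the data sync fails.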

If you're using persistent objects to store the data, you can specify DSTIME:

Parameter DSTIME = "AUTO";

and ^OBJ.DSTIME would be maintained automatically.
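
For example, a minimal persistent class with DSTIME enabled (the class name and property are hypothetical):

Class MyApp.Data.Record Extends %Persistent
{

Parameter DSTIME = "AUTO";

Property Value As %String;

}

Every insert, update, or delete of a MyApp.Data.Record object is then recorded in ^OBJ.DSTIME, which cube synchronization reads to update only the affected facts.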

UPD. Read your other comment. DSTIME is relevant only for syncing; it does not affect full-build behaviour.

For higher performance it's better to keep the data in the InterSystems platform and sync it with the remote DB periodically.

To download the data via xDBC you have two main approaches:

  • Interoperability (Ensemble) SQL inbound adapter
  • "Raw" access via %SQLGatewayConnection or %Net.Remote.Java.JDBCGateway

The Interoperability approach is better, as it solves most problems and the user just needs to enter the query, etc. "Raw" access can be faster and allows for fine-tuning (see the sketch below).
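
A minimal sketch of the "raw" route via %SQLGatewayConnection (the DSN, credentials, query, and column layout are assumptions):

ClassMethod FetchRemote() As %Status
{
    set gc = ##class(%SQLGatewayConnection).%New()
    set sc = gc.Connect("MyDSN", "user", "password", 0)
    quit:$$$ISERR(sc) sc
    set sc = gc.AllocateStatement(.hstmt)
    quit:$$$ISERR(sc) sc
    set sc = gc.Prepare(hstmt, "SELECT ID, Name FROM Source.Person")
    quit:$$$ISERR(sc) sc
    set sc = gc.Execute(hstmt)
    quit:$$$ISERR(sc) sc
    for {
        set sc = gc.Fetch(hstmt)
        quit:gc.sqlcode=100  // SQLCODE 100: no more rows
        set sc = gc.GetData(hstmt, 2, 1, .name) // column 2 as SQL_CHAR
        write name, !
    }
    set sc = gc.DropStatement(hstmt)
    quit gc.Disconnect()
}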

Now, to keep the data synced, several approaches are available:

  • If the source table has an UpdatedOn field, track it and fetch only the rows updated since the last sync.
  • Journals: some databases provide parsable journals; use them to determine which rows changed in the relevant tables.
  • Triggers: sometimes the source table has triggers (e.g. audit triggers) which, while not providing an explicit UpdatedOn field, can nonetheless be used to determine row update time.
  • Hashing: hash the incoming data and save the hash; update the row only if the hash changed (see the sketch after this list).
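
A minimal sketch of the hashing approach (the global, class, and helper method are hypothetical):

ClassMethod UpsertIfChanged(id As %String, name As %String, amount As %Numeric)
{
    // SHA-256 hash of the incoming row
    set hash = $system.Encryption.SHAHash(256, $lb(id, name, amount))
    // skip the write entirely when nothing changed
    if $get(^MyApp.RowHash(id)) '= hash {
        do ..UpsertRow(id, name, amount) // hypothetical persistence helper
        set ^MyApp.RowHash(id) = hash
    }
}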

If you can modify the source database, adding an UpdatedOn field is the best solution.

Linked tables allow you not to store the data permanently, but the cube would have to be rebuilt each time. With the other approaches, syncing the cube is enough.

Also check this guide on DeepSee Data Connectors.

I have no real answer to the logon issue.

What about custom login pages?

<html>
    <head>
        <title>Custom login page</title>
    </head>
    <body>
        <div>
            <form name="loginForm" class="form-signin" method="post" action="#("./index.csp")#">
                <p id="caption">Registration system</p>
                <input name="CacheLogin" value="1" type="hidden">
                <input id="CacheUserName" type="text" class="input-block-level" name="CacheUserName" placeholder="Login" value="_SYSTEM">
                <input type="password" class="input-block-level" name="CachePassword" placeholder="Password" value="SYS">
                <button class="btn btn-small btn-primary" type="submit" style="font-size: 1em;">Login</button>
            </form>
        </div>
    </body>
</html>

1. Export the Api package.

2. Uncompile and delete all classes from the Api package, regardless of case.

3. Delete the package with:

write $System.OBJ.DeletePackage("package name") 

4. Check whether anything related to the jitPod.Api package is left in:

zw ^oddPKG

Delete ONLY entries related to your package.

5. In the exported code, replace Api with api for all classes.

6. Restart Caché.

7. Import the classes.
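
Put together, a sketch of the whole sequence in a terminal session (the export file path is an assumption):

do $system.OBJ.Export("Api.*.cls", "/tmp/Api.xml") // 1. export the package
do $system.OBJ.DeletePackage("Api")                // 2-3. delete the classes and the package
zw ^oddPKG                                         // 4. inspect leftovers; delete only your package's entries
// 5. edit /tmp/Api.xml, replacing Api with api in all class names
// 6. restart Caché
do $system.OBJ.Load("/tmp/Api.xml", "ck")          // 7. import and compile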