univ-of-utah-marriott-library-apple / jctl

`jctl` uses `python-jamf` to select objects to create, delete, print, and update. It makes repetitive Jamf Pro tasks quick to perform and provides options not available in the web GUI. Its selection syntax is similar to SQL statements, but far less complex.

pkgctl - 10 Minute "Analyzing Jamf Data" Stage

ThingInCorner opened this issue

We recently started using jctl and pkgctl with a Jamf hosted server and cloud distribution point. The primary computer is running macOS Monterey 12.6, but I have seen the same behavior on a clean machine running macOS Ventura 13.0.1, from both corporate and home networks.

jctl works without issue, but pkgctl consistently takes around 10 minutes to get beyond the "Analyzing Jamf Data" stage.

Initially pkgctl was failing more often than not with 502 errors and I assumed the excessive run times were related, but the 502 errors seem to have sorted themselves out (I have made no changes that I'm aware of that would explain this). Now the command is consistently successful but it predictably takes just over 10 minutes to return the initial results. Once this step is complete, I'm able to make changes and updates that go through instantaneously. I do not see the same delay with any jctl commands that I have used.

Any combination of arguments that triggers the "Analyzing Jamf Data" stage produces the same behavior (pkgctl, pkgctl -c, pkgctl -g, pkgctl -u, pkgctl -n, pkgctl -r, pkgctl -i).

pkgctl -p updates the patch definitions quickly, but returns "No packages match patch software titles", which is unexpected, though I'm uncertain whether it's related to the same issue.

How many records do you have? pkgctl has to download all package, computer, group, policy, and patch policy data to find the relationships since it isn't built into the Jamf API. I'm guessing that's why it's taking so long. I've thought of caching the data, but pkgctl needs a major overhaul either way.
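To illustrate why the "Analyzing Jamf Data" stage scales with total record count, here is a minimal sketch of the kind of client-side cross-referencing described above. The record shapes and function names are hypothetical stand-ins, not python-jamf's actual API; the point is only that, with no "which policies use this package?" endpoint in the Jamf API, every policy must be downloaded and scanned before the first result can be printed.

```python
def map_packages_to_policies(packages, policies):
    """Return {package_name: [policy_name, ...]} by scanning every policy.

    Hypothetical data shapes: each package is {"name": str}, each policy is
    {"name": str, "packages": [str, ...]}. Work is O(policies * packages
    per policy), and all records must be fetched up front.
    """
    usage = {pkg["name"]: [] for pkg in packages}
    for policy in policies:
        for pkg_name in policy.get("packages", []):
            if pkg_name in usage:
                usage[pkg_name].append(policy["name"])
    return usage

# Toy data standing in for records fetched from the server.
packages = [{"name": "Firefox-120.pkg"}, {"name": "Zoom-5.16.pkg"}]
policies = [
    {"name": "Install Firefox", "packages": ["Firefox-120.pkg"]},
    {"name": "Update All", "packages": ["Firefox-120.pkg", "Zoom-5.16.pkg"]},
]

usage = map_packages_to_policies(packages, policies)
# usage["Firefox-120.pkg"] → ["Install Firefox", "Update All"]
```

In a real run the dominant cost is the network fetch of every package, computer, group, policy, and patch policy record, not the in-memory scan, which is why caching the downloaded data would help.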

That's understandable. This is the number of records we currently have in each of those categories:

  • Packages: 35
  • Computers: 757
  • Computer Groups: 96
  • Policies: 68
  • Patch policies: 12

That's not enough to cause it to take 10 minutes. Does logging into your server and using the web interface seem slow or laggy? Is your server in the cloud? If not, how much RAM and CPU does your server have? It's running Tomcat, and I'm not an expert on performance, but I believe throwing more RAM at it makes it faster than throwing more CPU at it. When was the last time you rebooted? And when was the last time you flushed out your old data (one of the steps you're supposed to do before updating Jamf)?

I'm asking all this to try to get an idea if it's your server or pkgctl. At one point pkgctl was actually crashing my server because it was doing so many lookups so quickly and my server couldn't handle it. I had to increase the RAM usage. Maybe I need to document this.

Also, pkgctl badly needs a rewrite. I wrote it very quickly over a few days, and I must have done a lot of things "wrong": ever since, I haven't been able to figure out what I was thinking when I wrote it, and I've had a hard time getting back into the code enough to fix its problems.

That said, I really need it. It is a thorn in my side that I can't use it daily. So it is high on my to-do list. Unfortunately, it's hard to work on pkgctl when python-jamf and jctl also aren't working right, which is where I've been putting my time lately.

We did just migrate from on-prem to a Jamf Cloud-hosted solution, both cloud server and distribution point, so we no longer have any control over the underlying hardware or functions. I have to assume the issue lies somewhere there since I haven't heard of any similar reports from on-prem users. We performed a pretty significant cleanup before migration so our server is probably the leanest it's ever been and everything else has been running very smoothly. I'd be curious to know if any other cloud-hosted users are having similar issues.

We also migrated recently from jss_helper, which we used with our on-prem server, so I don't have a local baseline to compare pkgctl against on our cloud server. For what it's worth, BIG-RAT's Prune iterates through everything pretty quickly, so there may be something worth looking at there.

I completely understand about priorities - I have several scripts that I swear were written by someone else, but no - it was me. I opened this issue as requested on Slack but I would like to stress that other than the significant delay, pkgctl is doing everything that I need it to do. I am more than happy to help in any way I can, but please don't feel any pressure to prioritize this issue over anything more significant. I very much appreciate the work you all do and the tools you make available to the community.

Hey @ThingInCorner, we recently reworked the API backend. Hopefully this is fixed. Can you test it again and let me know? I know it's been a long time...

@magnusviri Sorry for the delay, but the performance is greatly improved in the new version. Now consistently around 2 minutes for the initial run which I feel is reasonable, and everything else is working as expected. Thanks very much for your work on this.