Crawler PAM, Asset Management and Order
Our software experts developed the OFFICE ASSET Crawler to meet the requirements of secure and regular data transmission. The crawler collects data in the customer network via SCCM or the MS Graph API and transmits it to OFFICE ASSET without requiring a permanent interface into the company’s network.
The crawler is used by the following modules:
- PAM – It searches your network for existing printers fully automatically and adds them to OFFICE ASSET. In addition, it regularly reads out all relevant data from the printers via SNMP (e.g. printed pages, supply levels, error messages and alert conditions); see the sketch after this list. Based on this data, further processes can be initiated in OFFICE ASSET, e.g. fully automated ordering of required supplies.
- Asset Management – Using SCCM or the MS Graph API, the company’s existing assets are detected and their inventory data are imported into OFFICE ASSET.
- Order – For the Order module, the crawler can trigger PowerShell scripts in the customer network. This allows an employee to order permissions or software installations via the individualized eShop, and these orders are then executed automatically.
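The printer readout described in the PAM item above is SNMP-based. As a rough illustration, here is a minimal Java sketch of such a readout; the SNMP library (SNMP4J), the printer address, the community string and the choice of OIDs are assumptions made for the example, not part of the product documentation.

```java
import org.snmp4j.CommunityTarget;
import org.snmp4j.PDU;
import org.snmp4j.Snmp;
import org.snmp4j.event.ResponseEvent;
import org.snmp4j.mp.SnmpConstants;
import org.snmp4j.smi.GenericAddress;
import org.snmp4j.smi.OID;
import org.snmp4j.smi.OctetString;
import org.snmp4j.smi.VariableBinding;
import org.snmp4j.transport.DefaultUdpTransportMapping;

public class PrinterReadout {
    public static void main(String[] args) throws Exception {
        // Placeholder printer address and community string.
        CommunityTarget target = new CommunityTarget();
        target.setAddress(GenericAddress.parse("udp:192.0.2.10/161"));
        target.setCommunity(new OctetString("public"));
        target.setVersion(SnmpConstants.version2c);
        target.setRetries(1);
        target.setTimeout(1500);

        // Query the device description and the lifetime page count (standard Printer MIB).
        PDU pdu = new PDU();
        pdu.setType(PDU.GET);
        pdu.add(new VariableBinding(new OID("1.3.6.1.2.1.1.1.0")));           // sysDescr
        pdu.add(new VariableBinding(new OID("1.3.6.1.2.1.43.10.2.1.4.1.1"))); // prtMarkerLifeCount

        Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
        snmp.listen();
        ResponseEvent response = snmp.send(pdu, target);
        if (response.getResponse() != null) {
            for (VariableBinding vb : response.getResponse().getVariableBindings()) {
                System.out.println(vb.getOid() + " = " + vb.getVariable());
            }
        }
        snmp.close();
    }
}
```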
Crawl-E
The crawler for the home office: The Crawl-E was developed specifically for use in home offices to read out the print data of company printers used outside the company network. It offers the following advantages:
- Readout of individual printers by IP or hostname
- Fully preconfigured
- Quickly and easily ready for use
Information for system administrators: In order for users to download the already pre-configured version of the Crawl-E in OFFICE ASSET, it must first have been installed by a system administrator. The current Crawl-E version can be downloaded here.
Discontinued versions
| Crawler version | Note | End of support |
| --- | --- | --- |
| Up to version 7 | Update required | Support has ended |
| Versions 7 – 9 | Update recommended | Support will end in 2022 |
Versions
Summary of the most important changes
Version 18
18.0
- The GUI now displays the URL where the crawler can be reached.
- Bugfix: Parameters from the frontend were not escaped; for example, a + could not be used in a password without causing connection errors (see the encoding sketch after this list).
- Bugfix: In a Docker container it was not possible to connect to the crawler service using HTTPS.
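To illustrate the escaping issue, here is a minimal Java sketch of URL-encoding a frontend parameter such as a password containing a +; the parameter name and value are made up for the example.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class ParameterEscaping {
    public static void main(String[] args) {
        String password = "se+cret";  // example value containing '+'
        String encoded = URLEncoder.encode(password, StandardCharsets.UTF_8);
        // Without encoding, the '+' would be interpreted as a space on the receiving side.
        System.out.println("password=" + encoded);  // password=se%2Bcret
    }
}
```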
Version 17
17.0
- Change in the determination of the local IP.
- The crawler now also reports the data source that its engine identifies.
- Finer distinction in the OID test.
- The crawler can now perform SNMPv3 tests (see the sketch after this list).
- Bugfix: HP printers can now also be read out via SNMPv3 if they were configured via the printer’s web interface.
- The CrawlE now also crawls the network via SNMP to list printers.
- Bugfix: The CrawlE had synchronisation problems between the GUI and the printers.
- The CrawlE now waits for unreachable printers within a crawl.
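For the SNMPv3 tests mentioned above, here is a minimal Java sketch, again assuming the SNMP4J library; the user name, passphrases, printer address and the SHA/AES-128 protocol combination are placeholders chosen for the example.

```java
import org.snmp4j.PDU;
import org.snmp4j.ScopedPDU;
import org.snmp4j.Snmp;
import org.snmp4j.UserTarget;
import org.snmp4j.event.ResponseEvent;
import org.snmp4j.mp.MPv3;
import org.snmp4j.mp.SnmpConstants;
import org.snmp4j.security.*;
import org.snmp4j.smi.GenericAddress;
import org.snmp4j.smi.OID;
import org.snmp4j.smi.OctetString;
import org.snmp4j.smi.VariableBinding;
import org.snmp4j.transport.DefaultUdpTransportMapping;

public class SnmpV3Test {
    public static void main(String[] args) throws Exception {
        Snmp snmp = new Snmp(new DefaultUdpTransportMapping());

        // Register a USM user (placeholder credentials, SHA authentication + AES-128 privacy).
        USM usm = new USM(SecurityProtocols.getInstance(),
                new OctetString(MPv3.createLocalEngineID()), 0);
        SecurityModels.getInstance().addSecurityModel(usm);
        snmp.getUSM().addUser(new OctetString("monitor"),
                new UsmUser(new OctetString("monitor"),
                        AuthSHA.ID, new OctetString("authPassphrase"),
                        PrivAES128.ID, new OctetString("privPassphrase")));

        UserTarget target = new UserTarget();
        target.setAddress(GenericAddress.parse("udp:192.0.2.10/161"));
        target.setVersion(SnmpConstants.version3);
        target.setSecurityLevel(SecurityLevel.AUTH_PRIV);
        target.setSecurityName(new OctetString("monitor"));
        target.setRetries(1);
        target.setTimeout(1500);

        ScopedPDU pdu = new ScopedPDU();
        pdu.setType(PDU.GET);
        pdu.add(new VariableBinding(new OID("1.3.6.1.2.1.1.1.0"))); // sysDescr

        snmp.listen();
        ResponseEvent response = snmp.send(pdu, target);
        System.out.println(response.getResponse());
        snmp.close();
    }
}
```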
Version 16
- Bugfix: If a proxy was used, it could not be removed.
- The configuration page of the crawler is now also accessible via HTTPS.
- The connection test of the crawler now also updates the URL field.
- Bugfix: The SNMP test of the crawler could not handle special characters in the community string.
Version 15
- Bugfix: When changes are made to the crawler interface, the H2 database is reset.
- Bugfix: The crawler can now handle article scripts that receive special characters in their parameters.
- The CrawlE installation file is now smaller.
15.1
- If the system language is not known to CrawlE, English is used as the default (see the sketch after this list).
- The CrawlE GUI can now be refreshed to output information about the identified printers.
- When requesting the LOG file, the LOG files of the GUI and the printers are now also provided.
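A minimal sketch of such a language fallback in Java; the set of supported languages and the resource bundle name ("messages") are assumptions made for the example.

```java
import java.util.Locale;
import java.util.ResourceBundle;
import java.util.Set;

public class LanguageFallback {
    // Hypothetical set of languages shipped with the CrawlE GUI.
    private static final Set<String> SUPPORTED = Set.of("de", "en");

    public static ResourceBundle loadMessages() {
        Locale locale = Locale.getDefault();
        if (!SUPPORTED.contains(locale.getLanguage())) {
            locale = Locale.ENGLISH;  // unknown system language -> fall back to English
        }
        return ResourceBundle.getBundle("messages", locale);
    }
}
```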
15.0
- IPv6 scans are now also possible.
- The CrawlE now runs as a service.
- The CrawlE now has a user-friendly GUI for selecting printers.
- The GUI of the CrawlE has its own LOG file.
- Dependencies are now located next to the *.jar file and are no longer unpacked in it.
Version 13
- A Docker image can now be created for the crawler.
- If an SSL handshake does not work, a suggested solution is now written to the log for the customer.
Version 12
- The SNMP test now displays the result as information.
- Logging improved (fewer unnecessary stack traces).
- An entry is now written to the log roughly every hour, so it is clear when the crawler was running and when it was not.
12.0
- The contact link now leads directly to the SiBit homepage.
Version 11
- Java changes for Linux crawlers.
11.2
- The language of the GUI is now also specified via the setup.
11.1
- Minor bug fixes to the GUI.
11.0
- The crawler now has a web interface. This enables the service to be operated from other computers.
- The old GUI has been removed.
Version 10
- The MS Graph interface now supplies the system owner’s e-mail address with the assets so that the associated person can be (alternatively) identified.
10.0
- First version of the Crawl-E.
- No changes to the crawler.
Version 9
- Now includes and uses Java 11.0.8+10 (Zulu 11.41.23) from Azul Systems.
- The crawler can now execute PowerShell scripts, which are configured via the OA interface (see the sketch below).
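The documentation does not describe how these scripts are launched; the following is a minimal Java sketch of invoking a PowerShell script from the crawler host, assuming a Windows machine and a made-up script path and parameter.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

public class PowerShellRunner {
    public static void main(String[] args) throws Exception {
        // Hypothetical script path and parameter delivered by OFFICE ASSET.
        List<String> command = new ArrayList<>(List.of(
                "powershell.exe", "-NoProfile", "-ExecutionPolicy", "Bypass",
                "-File", "C:\\crawler\\scripts\\install-software.ps1",
                "-PackageId", "example-package"));

        Process process = new ProcessBuilder(command)
                .inheritIO()   // pass script output through to the console/log
                .start();

        boolean finished = process.waitFor(5, TimeUnit.MINUTES);
        if (!finished) {
            process.destroyForcibly();  // avoid hanging on a stuck script
            System.out.println("Script timed out");
        } else {
            System.out.println("Exit code: " + process.exitValue());
        }
    }
}
```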
9.2
- The crawler now sends the local IP and the host name of the computer on which it is installed.
9.1.1
- Bugfix: If the zone is changed in the support tab, all zone-specific information is removed from the Crawler H2 database (crawl configurations, scan configurations, etc.).
9.1
- The manual start of the crawl timer has been removed.
- The crawl timer will now be restarted automatically after it has ended.
- Various minor improvements to the Java daemons.
9.0
- The crawler can now communicate with the Microsoft Graph API.
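As a rough illustration of such a Graph call, here is a minimal Java sketch that lists devices via the Microsoft Graph REST endpoint. Obtaining the OAuth2 access token (e.g. via the client-credentials flow) is omitted, and the environment variable used to pass it in is an assumption for the example.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GraphDeviceQuery {
    public static void main(String[] args) throws Exception {
        // Assumed: a valid access token obtained beforehand (e.g. OAuth2 client-credentials flow).
        String accessToken = System.getenv("GRAPH_ACCESS_TOKEN");

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://graph.microsoft.com/v1.0/devices?$select=displayName,operatingSystem"))
                .header("Authorization", "Bearer " + accessToken)
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.body()); // JSON listing the registered devices/assets
    }
}
```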
Archive
8.0
- The crawler can now perform SCCM queries.
- Known Issue: The translations of the crawler are not working properly. Fixed in version 9.0.
Version 7
7.6
- Minor bug fixes.
7.5.1
- The password query for the support tab of the crawler now also works on macOS.
7.5
- Bugfix: A scan could not be saved in the H2 database if the result value had more than 255 characters. The maximum was increased to 2,500 characters, and a log entry is now written if this occurs again.
7.4
- The REST server now listens to localhost by default, but can be set using the “networkInterface” setting in the configuration file.
7.3
- A crawl is now also carried out if there is no crawl configuration. This is required to transfer results from local printers if necessary.
7.2
- Now outputs messages on operating systems without a GUI if root rights are missing for the .sh scripts.
- Various code adjustments to conventions.
7.1
- Support and setup for Linux operating systems.
- Bugfix: Fixed some spelling errors in the translations.
- Bugfix: The user interface tries for 10 seconds to load the password for REST communication with the service; after that, it assumes the service has not started (only relevant for the first start of the crawler).
- Bugfix: The terminal user interface (for example for Linux OS) now outputs unknown errors.
7.0
- RMI replaced by REST (Jersey v2.25.1) for communication between GUI and service.
- REST uses basic authentication with a password (see Client.config) and an optional user name; a client sketch follows after this list.
- The crawler can now accept results from a local USBPrintReader (also via REST) and send them to the OA in the crawl interval.
- GUI and service dependency management switched to Maven.
- Hibernate update from 4.1.9 to 4.3.11 final (no major update).
- H2 driver update from 1.3.176 to 1.4.196 (1.4.197 contains critical bugs).
- The crawler now empties its crawl configurations when the OA does not provide any more.
- Whether the PamService is running is now also checked via REST and no longer via a VB script.
- The distinction between whether the service is unavailable or not even running has been removed.
- Mainly serves as preparation for the Unix crawler.
- Bugfix: Deleting and recreating the result tables was never triggered.
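A minimal sketch of a Jersey 2 client talking to the crawler’s REST service with basic authentication; the endpoint path, port and credentials are placeholders, since the actual values come from Client.config.

```java
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;
import org.glassfish.jersey.client.authentication.HttpAuthenticationFeature;

public class CrawlerRestClient {
    public static void main(String[] args) {
        // Placeholder credentials; in the real setup they are read from Client.config.
        Client client = ClientBuilder.newClient()
                .register(HttpAuthenticationFeature.basic("crawler", "secret"));

        // Hypothetical status endpoint of the local crawler service.
        String status = client.target("http://localhost:8080/crawler/status")
                .request(MediaType.APPLICATION_JSON)
                .get(String.class);

        System.out.println(status);
        client.close();
    }
}
```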
3.x – 6.x
These versions have been discontinued and are no longer supported! We strongly advise against using them. Their security mechanisms are not sufficient by today’s standards.