Crawler PAM, Asset Management and Order

Our software experts developed the OFFICE ASSET Crawler to meet the requirements of secure and regular data transmission. The crawler collects data in the customer network via SCCM or MS Graph and transmits it to OFFICE ASSET without using a permanent interface into the company's network.

The crawler is also used by the following modules:

  • PAM – It searches your network for existing printers fully automatically and adds them to OFFICE ASSET. In addition, it regularly reads out all relevant data from the printers (e.g. printed pages, levels of printer supplies, error messages and alert conditions); a readout sketch follows after this list. Based on this data, further processes can be initiated in OFFICE ASSET, e.g. fully automated ordering of required supplies.
  • Asset Management – Using SCCM or the MS Graph API, the company's existing assets are detected and their inventory data is imported into OFFICE ASSET.
  • Order – For the Order module, the crawler can trigger PowerShell scripts in customer networks. With this function, an employee can order permissions or software installations via an individualized eShop, and the orders are executed automatically.
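
The PAM readout mentioned above is based on standard printer SNMP data. As a rough illustration (not the crawler's actual implementation), the following Python sketch reads a printer's lifetime page count with the pysnmp library; the printer address, community string and OID are placeholder assumptions.

    # Minimal sketch: read the lifetime page count of a printer via SNMP.
    # Assumptions: SNMP v2c, community "public", illustrative address/OID.
    from pysnmp.hlapi import (
        getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
        ContextData, ObjectType, ObjectIdentity,
    )

    PRINTER_IP = "192.0.2.10"                        # hypothetical printer
    PAGE_COUNT_OID = "1.3.6.1.2.1.43.10.2.1.4.1.1"   # prtMarkerLifeCount

    error_indication, error_status, _, var_binds = next(
        getCmd(
            SnmpEngine(),
            CommunityData("public", mpModel=1),      # SNMP v2c
            UdpTransportTarget((PRINTER_IP, 161)),
            ContextData(),
            ObjectType(ObjectIdentity(PAGE_COUNT_OID)),
        )
    )

    if error_indication or error_status:
        print("SNMP error:", error_indication or error_status.prettyPrint())
    else:
        for oid, value in var_binds:
            print(oid, "=", value)                   # e.g. total printed pages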

Crawler Downloads

Crawler version: 18.0 (OFFICE ASSET version: 9.0). The current crawler is also available as a Docker image.

Downloads for older versions

Crawl-E

The crawler for the home office: The Crawl-E was developed specifically for use in home offices to read the print data of company printers used externally. It offers the following advantages:

  • Readout of individual printers by IP or hostname
  • Fully preconfigured
  • Quickly and easily ready for use

Information for system administrators: Before users can download the pre-configured version of the Crawl-E in OFFICE ASSET, it must first be installed by a system administrator. The current Crawl-E version can be downloaded here.

Discontinued versions

Crawler version     Note                  End of support
up to version 7     Update required       Support has ended
versions 7 – 9      Update recommended    Support will end in 2022

Versions: summary of the most important changes

Version 18

18.0

  • The GUI now displays the URL where the crawler can be reached.
  • Bugfix: Parameters from the frontend were not escaped. For example, it was not possible to use a + in a password; this caused connection errors.
  • Bugfix: In a Docker container it was not possible to connect to the crawler service using HTTPS.

Version 17

17.0

  • Change in the determination of the local IP.
  • The crawler now also reports the data source that its engine identifies.
  • Finer distinction in the OID test.
  • The crawler can now perform SNMPv3 tests (see the sketch after this list).
  • Bugfix: HP printers can now also be read out via v3 if they were configured via the printer's web interface.
  • The Crawl-E now also crawls the network via SNMP to list printers.
  • Bugfix: The Crawl-E had synchronisation problems between the GUI and the printers.
  • The Crawl-E now waits for unreachable printers within a crawl.
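
The SNMPv3 test added in this version could, in outline, work like the following Python sketch with pysnmp; the user name, passphrases and protocol choices are placeholders, since the crawler's supported security levels are not documented here.

    # Minimal sketch: an SNMPv3 query of sysDescr.0 with authentication
    # and privacy. All credentials and protocol choices are placeholders.
    from pysnmp.hlapi import (
        getCmd, SnmpEngine, UsmUserData, UdpTransportTarget, ContextData,
        ObjectType, ObjectIdentity,
        usmHMACSHAAuthProtocol, usmAesCfb128Protocol,
    )

    error_indication, error_status, _, var_binds = next(
        getCmd(
            SnmpEngine(),
            UsmUserData(
                "snmp-user", "auth-pass", "priv-pass",  # hypothetical user
                authProtocol=usmHMACSHAAuthProtocol,    # SHA-1 authentication
                privProtocol=usmAesCfb128Protocol,      # AES-128 privacy
            ),
            UdpTransportTarget(("192.0.2.10", 161)),
            ContextData(),
            ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")),  # sysDescr.0
        )
    )

    if error_indication or error_status:
        print("SNMPv3 test failed:", error_indication or error_status)
    else:
        print("SNMPv3 test OK:", var_binds[0].prettyPrint())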

Version 16

16.0
  • Bugfix: If a proxy was used, it could not be removed.
  • The configuration page of the crawler is now also accessible via HTTPS.
  • The connection test of the crawler now also updates the URL field.
  • Bugfix: The SNMP test of the crawler could not handle special characters in the community string.

Version 15

15.2
  • Bugfix: When changes were made to the crawler interface, the H2 database was reset.
  • The crawler can now handle article scripts that receive special characters in parameters.
  • The Crawl-E installation file is now smaller.

15.1

  • If the system language is not known to the Crawl-E, English is used as the default.
  • The Crawl-E GUI can now be refreshed to display information about the identified printers.
  • When requesting the LOG file, the LOG files of the GUI and the printers are now also provided.

15.0

  • IPv6 scans are now also possible.
  • The Crawl-E now runs as a service.
  • The Crawl-E now has a user-friendly GUI for selecting printers.
  • The GUI of the Crawl-E has its own LOG file.
  • Dependencies are now located next to the *.jar file and are no longer unpacked into it.

Version 13

13.0
  • A Docker image can now be created for the crawler.
  • If an SSL handshake fails, a suggested solution is now written to the log for the customer.

Version 12

12.1
  • The SNMP test now displays the result as information.
  • Logging improved (fewer unnecessary stack traces).
  • An entry is now written to the log roughly every hour, so it is clear when the crawler was running and when it was not.

12.0

  • Contact now leads directly to the SiBit homepage.

Version 11

11.3
  • Java changes for Linux crawlers.

11.2

  • The language of the GUI is now also specified via the setup.

11.1

  • Minor bug fixes to the GUI.

11.0

  • The crawler now has a web interface. This enables the service to be operated from other computers.
  • The old GUI has been removed.

Version 10

10.1
  • The MS Graph interface now supplies the system owner's e-mail address with the assets so that the associated person can (alternatively) be identified.

10.0

  • First version of the Crawl-E.
  • No changes to the crawler.

Version 9

9.3
  • Now includes and uses Java 11.0.8+10 (Zulu 11.41.23) from Azul Systems.
  • The crawler can now execute PowerShell scripts, which are configured via the OA interface (see the sketch below).
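
How the crawler launches these scripts is not documented here; as a rough sketch, executing a PowerShell script and capturing its output from Python could look like this (the script path and argument are placeholders):

    # Minimal sketch: run a PowerShell script and return its output.
    # The path and argument are placeholders; the crawler's actual
    # invocation of scripts configured via the OA interface may differ.
    import subprocess

    def run_powershell(script_path: str, *args: str) -> str:
        """Execute a PowerShell script file and return its stdout."""
        result = subprocess.run(
            ["powershell.exe", "-NoProfile", "-ExecutionPolicy", "Bypass",
             "-File", script_path, *args],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    print(run_powershell(r"C:\scripts\grant-permission.ps1", "jdoe"))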

9.2

  • The crawler now sends the local IP and the host name of the computer on which it is installed.

9.1.1

  • Bugfix: If the zone is changed in the support tab, all zone-specific information is removed from the crawler's H2 database (crawl configurations, scan configurations, etc.).

9.1

  • The manual start of the crawl timer has been removed.
  • The crawl timer is now restarted automatically after it has ended.
  • Various minor improvements to the Java daemons.

9.0

  • The crawler can now communicate with the Microsoft Graph API (see the sketch below).
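
Which Graph endpoints the crawler queries is not specified here; as an illustration only, listing managed devices from the Microsoft Graph API in Python could look like the following sketch (the endpoint choice and token acquisition are assumptions):

    # Minimal sketch: list managed devices via the Microsoft Graph API.
    # Assumption: an OAuth2 access token with suitable permissions has
    # already been acquired; the endpoint is illustrative only.
    import requests

    ACCESS_TOKEN = "<token acquired via an Azure AD app registration>"

    resp = requests.get(
        "https://graph.microsoft.com/v1.0/deviceManagement/managedDevices",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()

    for device in resp.json().get("value", []):
        print(device.get("deviceName"), device.get("operatingSystem"))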

Archive

Version 8
8.0
  • The crawler can now perform SCCM queries.
  • Known issue: The translations of the crawler are not working properly. Fixed in version 9.0.

Version 7

7.6
  • Minor bug fixes.
7.5.1
  • The password query for the support tab of the crawler now also works on macOS.
7.5
  • Bugfix: A scan could not be saved in the H2 database if the result value had more than 255 characters. The maximum was increased to 2,500 characters, and a log entry is now written if this occurs again.
7.4
  • The REST server now listens on localhost by default, but this can be changed using the "networkInterface" setting in the configuration file (see the sketch below).
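
The exact format of the configuration file is not shown here; assuming a simple key=value file, binding the REST server to a specific interface instead of the localhost default might look like this (the address is a placeholder):

    # Hypothetical excerpt from the crawler's configuration file
    networkInterface=192.0.2.25
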
7.3
  • A crawl is now also carried out if there is no crawl configuration. This is required to transfer results from local printers if necessary.
7.2
  • Messages are now output on operating systems without a GUI if root rights are missing for the .sh scripts.
  • Various code adjustments to conventions.
7.1
  • Support and setup for Linux operating systems.
  • Bugfix: Fixed some spelling errors in the translations.
  • Bugfix: The user interface tries for 10 seconds to load the password for REST communication with the service; after that, it is assumed that the service has not started (only relevant for the first start of the crawler).
  • Bugfix: The terminal user interface (for example on Linux) now outputs unknown errors.
7.0
  • RMI exchanged for REST (Jersey v2.25.1) (communication GUI <-> service).
  • REST uses basic authentication with a password (see Client.config) and an optional user name.
  • The crawler can now accept results from a local USBPrintReader (also via REST) and send them to OA in the crawl interval.
  • GUI and service dependency management switched to Maven.
  • Hibernate updated from 4.1.9 to 4.3.11 Final (no major update).
  • H2 driver updated from 1.3.176 to 1.4.196 (1.4.197 contains critical bugs).
  • The crawler now empties its crawl configurations when OA no longer provides any.
  • Whether the PamService is running is now also checked via REST and no longer via a VB script.
  • The distinction between whether the service is unavailable or not running at all has been removed.
  • Mainly serves as preparation for the Unix crawler.
  • Bugfix: Deleting and recreating the result tables was never triggered.

  3.x – 6.x: These versions have been discontinued and are no longer supported! We strongly advise against using them. Their security mechanisms are not sufficient by today's standards.
