Crawler PAM, Asset Management and Order

Our software experts developed the OFFICE ASSET Crawler to meet the requirements of secure and regular data transmission. The crawler collects data in the customer network via SCCM or MS Graph and transmits it to OFFICE ASSET without requiring a permanent interface into the company's network.
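
As an illustration of this outbound-only principle, here is a minimal Python sketch: it reads device inventory from the public Microsoft Graph /v1.0/devices endpoint and transmits it to OFFICE ASSET in a single outbound HTTPS request, so nothing has to connect into the company network from outside. The upload URL, tokens and payload layout are placeholders for illustration, not the actual OFFICE ASSET interface.

    # Sketch of the outbound-only transmission principle: collect inventory
    # from Microsoft Graph, then push it to OFFICE ASSET over HTTPS.
    # The Graph endpoint /v1.0/devices is real; the upload URL, tokens and
    # payload layout below are placeholders.
    import requests

    GRAPH_TOKEN = "<azure-ad-access-token>"                             # from an app registration
    UPLOAD_URL = "https://tenant.example-office-asset.cloud/api/upload" # placeholder
    UPLOAD_KEY = "<crawler-api-key>"                                    # placeholder

    def collect_devices() -> list:
        """Read managed devices from Microsoft Graph (read-only)."""
        resp = requests.get(
            "https://graph.microsoft.com/v1.0/devices",
            headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("value", [])

    def push_to_office_asset(devices: list) -> None:
        """Transmit the collected inventory in one outbound HTTPS request."""
        resp = requests.post(
            UPLOAD_URL,
            json={"assets": devices},
            headers={"Authorization": f"Bearer {UPLOAD_KEY}"},
            timeout=30,
        )
        resp.raise_for_status()

    if __name__ == "__main__":
        push_to_office_asset(collect_devices())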

The crawler can also be used for other modules:

  • PAM – It automatically searches your network for existing printers and adds them to OFFICE ASSET. In addition, it regularly reads out all relevant data from the printers (e.g. printed pages, supply levels, error messages and alert conditions). Based on this data, further processes can be initiated in OFFICE ASSET, e.g. fully automated ordering of required supplies (a readout sketch follows this list).
  • Asset Management – Using SCCM or the MS Graph API, the company's existing assets are detected and their inventory data is imported into OFFICE ASSET.
  • Order – For the Order module, the crawler can trigger PowerShell scripts in the customer network. This allows an employee to order permissions or software installations via an individualized eShop, and the orders are then executed automatically (a trigger sketch also follows this list).
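
This page does not document how PAM queries the printers; SNMP against the standard Printer MIB is the common mechanism for page counters and supply levels. The following is a minimal sketch assuming the synchronous pysnmp 4.x hlapi; the printer address and community string are placeholder examples, not OFFICE ASSET defaults.

    # Minimal sketch: read a printer's lifetime page counter via SNMP v2c.
    # Assumes the classic synchronous pysnmp 4.x hlapi; the OID is the
    # standard Printer MIB prtMarkerLifeCount, the address and community
    # string are placeholder examples.
    from pysnmp.hlapi import (
        SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
        ObjectType, ObjectIdentity, getCmd,
    )

    PRT_MARKER_LIFE_COUNT = "1.3.6.1.2.1.43.10.2.1.4.1.1"  # total printed pages

    def read_page_count(host: str, community: str = "public") -> int:
        """Query one printer and return its lifetime page count."""
        error_indication, error_status, _, var_binds = next(
            getCmd(
                SnmpEngine(),
                CommunityData(community, mpModel=1),          # SNMP v2c
                UdpTransportTarget((host, 161), timeout=2, retries=1),
                ContextData(),
                ObjectType(ObjectIdentity(PRT_MARKER_LIFE_COUNT)),
            )
        )
        if error_indication or error_status:
            raise RuntimeError(f"SNMP query failed: {error_indication or error_status}")
        return int(var_binds[0][1])

    print(read_page_count("192.168.0.50"))  # example printer IP

For the Order module, the page only states that the crawler can trigger PowerShell scripts; the exact mechanism is not documented. As a rough illustration, a Windows-side component could launch a script like this (the script path and arguments are hypothetical):

    # Rough sketch: run a PowerShell script on the crawler's Windows host.
    # Script path and arguments are hypothetical placeholders.
    import subprocess

    def run_order_script(script_path: str, *args: str) -> str:
        """Execute a PowerShell script and return its standard output."""
        result = subprocess.run(
            ["powershell.exe", "-NoProfile", "-ExecutionPolicy", "Bypass",
             "-File", script_path, *args],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    # e.g. run_order_script(r"C:\scripts\grant_permission.ps1", "jdoe", "SharePoint")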

Crawler Downloads

     Current crawler as Docker file

 

Crawl-E

The crawler for the home office: The Crawl-E was developed specifically for use in home offices to read the print data of company printers used externally. It offers the following advantages:

  • Readout of individual printers by IP address or hostname
  • Fully preconfigured
  • Quickly and easily ready for use

Information for system administrators: In order for users to download the already preconfigured version of Crawl-E in OFFICE ASSET, it must first have been installed by a system administrator.

Discontinued versions

 

Crawler version      Note                  End of support
up to Version 7      Update required       Support has ended
Versions 7 – 9       Update recommended    Support will end in 2022

Versions: Summary of the most important changes

Crawler Version 20.1/20.2

  • The maximum amount of memory (RAM) used by the crawler can now be set via the configuration interface.
  • The crawler now indicates when a new version is available.
  • You can now set up your own certificate for communication with the configuration page.
  • After running the setup, the crawler configuration page can be opened directly.
  • Underlying libraries updated to improve stability and security.

Crawler Version 20

  • The crawler can now act as a proxy server for the printers' web pages, so that they can also be accessed via a cloud server. (Beta feature!)
  • Java major version updated to 17.
  • Crawler configuration interface updated and display improved.

Crawler Version 19

  • The MS Graph API can now also read user groups.
  • Fixed an issue where the resulting URL could be wrapped.
  • The Linux version can now be run with reduced privileges.
  • Fixed a problem where an update might not have worked.

Crawler Version 18

  • The GUI now displays the URL at which the crawler can be reached.
  • Fixed an issue where it was not possible to connect to the crawler service via HTTPS in a Docker container.