Module 7 - TrustSource Administration


  • Goals: Provide basics of Tooling and TrustSource Administration
  • Contents:
    Open Source Compliance Capability Map, matching open source tools, TrustSource architecture, TrustSource services, users, LDAP & identity integration, MFA, inviting external users, managing roles and rights, general settings, handling APIs and API keys, throttling and usage limitations, configuring environment-specific settings, using the multi-entity setup, information about system status, contacting support, stay tuned
  • Target Groups: TrustSource administrators


Module 3 - Open Source Governance


Open Source Governance is key to achieving Open Source Compliance.

  • Goals:
    Understand the challenges of achieving and maintaining compliance across large organisations and how TrustSource may help
  • Contents:
    Understand the OpenChain specification and the OpenChain self-certification procedure, challenges of OS compliance in large organisations, goals of governance, the OS governance board and OS governor, OS policy ingredients, policy distribution, TrustSource support for OS policy distribution, inbound governance, outbound governance, committer governance, working contracts and OS governance / work time vs. spare time, international law and work contracts in the context of OS governance, managing OS and external workforces, summary & test

Target group: compliance managers, CISOs, business unit leads


Module 8 - Recent Developments


Open Source is on the move – tools, legal requirements and business models are constantly changing. This module gives an overview of the developments of the last 12 months.

  • Current case law and new legal requirements
  • New and updated licensing conditions
  • Overview of new tools, features and functions
  • Duration: approx. 60 minutes

Target group: developers, project managers, product owners, compliance managers, architects


Open Source Compliance in the context of containers

Modern software development, especially with micro-service architectures, is hardly imaginable without container technologies such as Docker. The promise of ease and flexibility through containers proves true. But what does this flexibility mean for open source compliance?

It’s wonderful: just take an image (FROM instruction) from Docker Hub with e.g. Alpine Linux, a very lightweight distribution, as a base, then add some source files from your own repository (COPY) and build the target application with Maven (RUN). Quickly add a web server with apk add (RUN) and some configuration (another COPY). The Java service is ready for delivery. Thanks to Docker!
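
To make this concrete, a minimal sketch of such a Dockerfile might look as follows (all names and versions are illustrative, not taken from a real project):

    # Sketch of the Dockerfile described above; names and versions are illustrative.
    # Base image from Docker Hub: Alpine, a very lightweight Linux distribution
    FROM alpine:3.11

    # Add build tooling and a web server via the Alpine package manager
    RUN apk add --no-cache openjdk8 maven nginx

    # Copy your own source files from the repository into the image
    COPY pom.xml /build/
    COPY src/ /build/src/

    # Build the target application with Maven
    RUN cd /build && mvn -q package

    # Add some configuration for the web server
    COPY nginx.conf /etc/nginx/nginx.conf

    CMD ["java", "-jar", "/build/target/service.jar"]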

It seems to be very little: a Dockerfile with 6-7 lines, plus a handful of your own classes, maybe 300-500 lines of code. Nevertheless, the total adds up to half a million lines of code, 99.9% of it open source, pulled from the net while building the image. Security concerns aside, this is how many of the services used today are created, especially in the context of current micro-service architectures.

March 11, 2020 | Knowledge | By Jan Thielscher

A considerable share of almost all services is based on Open Source

This description is not intended to diminish the achievements of service developers, nor is it universally applicable. However, it illustrates roughly the extent to which open source pervades today’s software development. A significant part of the efficiency gains in software production rests on the free availability of infrastructure solutions as open source.

Irrespective of this, however, this leverage also brings new challenges. In the commercial environment in particular, the use of open source must be well documented, and not only for security reasons. There are also legal ones:

Whenever an author’s work is used by a third party, the user must secure the rights of use. This happens through an agreement, even a tacit one, between the author (copyright holder) and the user. Usually the author places his code under a “license”. In the context of open source, these licenses typically grant the rights for commercial or scientific use, modification, and distribution in source or binary form, etc. There are hundreds of these licenses, some more permissive, some more restrictive; see the OSI license list or the official SPDX list.

Know the conditions of Open Source

It is very important for users of open source, however, that many of the comparatively permissive licenses tie their validity to conditions! If these conditions are not fulfilled, the right of use is void!

The following excerpt from the well-known Apache 2.0 license, for example, clearly ties the right of distribution to compliance with additional conditions. These conditions include, among others, providing the license text, naming the authors, and retaining copyright information in the source code. Where applicable, these must also be reproduced in a NOTICE file. If any changes have been made to the software, these must be flagged as well. If even one of these conditions is not fulfilled, the right to distribute the software expires!

Containers add a special aspect to this already demanding list of requirements. With their help, it is possible to deliver essential parts of the runtime environment preconfigured. This extends the scope of the delivery considerably. Where previously a ZIP, EAR or WAR file containing your own code was delivered, the container can now ship a complete Linux with a pre-installed Tomcat, WildFly or other open source components, ready for use.

If not only the Dockerfile – i.e. the recipe for the construction – is delivered but the finished image, all infrastructure components become part of the delivery. Depending on the licenses involved, this can have different consequences, especially in the area of documentation, but also with regard to intellectual property. In any case, it becomes critical if one of the license conditions is not fulfilled. Unauthorised commercial marketing is no longer a trivial offence.

The majority of open source licenses require at least the naming of the author, if not the provision of the license text together with the delivered solution. However, if you look at the images on Docker Hub, you will find neither bills of materials nor corresponding metadata, let alone license texts. Rarely is there even a hint or link to the terms and conditions of the Dockerfile or image.

4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
  1. You must give any other recipients of the Work or Derivative Works a copy of this License; and
  2. You must cause any modified files to carry prominent notices stating that You changed the files; and
  3. You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
  4. If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

Accepting documentation as a challenge

Woe betide whoever must now produce the compliance documentation for a dockerized product enriched across several value-creation stages and partners! As described above, the image finally used can originate from the most diverse sources. Especially critical are the manually added files: they can contain anything imaginable and elude virtually any form of analysis.

Furthermore, the author of an image can copy content into the image from public or private sources, load source code, install and execute packages or software, or even build it. The latter is also possible via package managers, which in turn resolve and fetch the necessary dependencies. In short: in every layer, a Docker image offers its author the same opportunities for dirty tricks as a real server would.

Tools and support

Fortunately, there are several open source tools that can help here. Not all of them were created for this purpose, but some can be put to use: tern on the one hand, and Clair on the other.

Tern is developed by a small team at VMware to decompose Docker images and collect the license information of the components contained in them. Its focus is on analysis for legal purposes.

Clair was created with a focus on vulnerability analysis and cares less about the legal question. However, Clair also indexes the components of a container and could therefore serve as a supplier of BOM information. If only the legal aspect is in focus, though, Clair would be using a sledgehammer to crack a nut (see below).

tern

Tern can perform its analysis on the basis of a Dockerfile as well as a Docker image. To do this, tern uses Docker together with the Linux “find” command and the extended attributes of the Linux file system. In newer Linux distributions the required attr library is already available; in older ones it may have to be installed separately.

Since tern drives the find command through a simple bash script and analyzes the files on mounted host volumes, the Docker-based variant of tern currently cannot run on Mac or Windows. It runs well on a Linux system, however. tern is developed in Python 3 and can also be installed via pip.
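
A minimal session on a Linux machine might look like this (the image name is just an example; consult the tern documentation for the full set of options):

    # install tern from PyPI (requires Python 3 and a local Docker daemon)
    pip install tern

    # analyze a finished image, layer by layer
    tern report -i golang:1.12-alpine

    # or analyze a Dockerfile instead of an image
    tern report -d Dockerfile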

As a result, tern prints a list of the components it has found in each layer of the image (see the example below).

A Docker image is what a Docker container is created from. It is, so to speak, the building plan according to which the Docker daemon assembles the container. You can picture the construction of a container as analogous to 3D printing: each instruction line of the Dockerfile creates its own layer. These layers are separated by namespaces and are therefore more independent than they would be on a real system.
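
These layers, and the commands that created them, can be inspected with standard Docker tooling; the output below is abridged and illustrative:

    $ docker history ubuntu:18.04
    IMAGE        CREATED      CREATED BY                                      SIZE
    …            …            /bin/sh -c #(nop)  CMD ["/bin/bash"]            0B
    …            …            /bin/sh -c #(nop) ADD file:91a750fb18471…       63.2MB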

Thus, an existing image can be loaded with the FROM command. It comes with everything its author gave it. What exactly that is can only be determined through detailed analysis. This is one of the reasons why you should obtain images only from trustworthy sources, or preferably build them from scratch!

Images should always come from trustworthy sources or be built from scratch!


------------------------------------------------

Layer: 9a78064319:

info: Found 'Ubuntu 18.04.4 LTS' in /etc/os-release.

info: Layer created by commands: /bin/sh -c #(nop) ADD file:91a750fb184711fde03c9172f41e8a907ccbb1bfb904c2c3f4ef595fcddbc3a9 in / 

info: Retrieved by invoking listing in command_lib/base.yml

Packages found in Layer: adduser-3.116ubuntu1, apt-1.6.12, base-files-10.1ubuntu2.8, base-passwd-3.5.44, bash-4.4.18-2ubuntu1.2, bsdutils-1:2.31.1-0.4ubuntu3.5, bzip2-1.0.6-8.1ubuntu0.2, coreutils-8.28-1ubuntu1, dash-0.5.8-2.10, debconf-1.5.66ubuntu1, debianutils-4.8.4, diffutils-1:3.6-1, (…) , zlib1g-1:1.2.11.dfsg-0ubuntu2

Licenses found in Layer:  None

------------------------------------------------

Layer: dd59018c47:

(…)

If things go well, tern also supplies metadata such as licenses for the identified packages. The tool still has some difficulties with the analysis of chained commands (&&) in the Dockerfile and, inevitably, with processing the contents of COPY statements. To overcome the latter, the tern team has created a way to integrate other tools, such as ScanCode (https://github.com/nexB/scancode-toolkit) from nexB.
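
If we read the tern documentation correctly, such an external analyzer is hooked in via the -x switch, roughly like this:

    # run tern with the ScanCode extension for file-level license detection
    tern report -x scancode -i golang:1.12-alpine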

In addition, we are currently working on extending tern with an interface to TrustSource, so that the results of the analysis can be transferred directly to TrustSource for further processing. The image would become a separate project, the layers found would be interpreted as TrustSource modules, and the components found within a layer as components. This would make it possible to combine the discovered components with TrustSource metadata and vulnerability information, thus integrating the container into a compliance process and automating the documentation as far as possible.

Clair

On the other hand there is Clair, written in Go and originally developed by Quay.io, now part of Red Hat via CoreOS. As described above, Clair focuses on the identification of vulnerabilities in Docker images. The heart of Clair is a database which is continuously fed with vulnerability information.

This database can be accessed via an API and can thus be integrated with the container registry used in your own project. Scans then do not have to be triggered manually but can be integrated into the CI/CD chain after image creation.

In the course of the scan, a parts list of the image is generated – analogous to tern’s procedure – and transferred to Clair. There the components found are then compared with the collected vulnerability information. If necessary, Clair can trigger certain notifications to draw attention to identified vulnerabilities.

In principle, this is a sensible structure. Especially where there are large numbers of image producers and consumers, it is highly recommended to explicitly check the images again for vulnerabilities.

However, the use of image-specific whitelists, as envisaged by Clair, is quite critical in this context: a vulnerability that is harmless in use case A, and therefore whitelisted, could well be dangerous in context B. Image-specific whitelists are therefore rather counterproductive. As in our TrustSource solution, whitelists must be context-specific.

Another limitation of Clair is the timing of the scan: it only takes place after the image has been built and transferred to the repository. But actually, no image with vulnerabilities should be transferred to a repository at all! Errors should already be caught in the CI/CD process, and such an image should not even be built, let alone stored.

For this reason Armin Coralic (@acoralic) has extracted a scanner version based on the work of the CoreOS team which can be integrated into the CI/CD process, bringing the check forward, so to speak (see GitHub). In its current form, however, this version only displays the vulnerabilities found, not the complete parts list, although it must be able to determine it. It would therefore make sense to output the parts list available in Clair directly; this would require corresponding adjustments to the Clair project.
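
With such a standalone scanner, the check can run inside the CI job before the image is ever pushed. An invocation might look roughly like this (IP address, whitelist and image name are placeholders):

    # scan a locally built image against a running Clair instance before pushing
    clair-scanner --ip 192.168.1.50 --clair="http://localhost:6060" \
                  -w whitelist.yaml my-service:latest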

TrustSource Platform

We generally recommend that all checks be moved as far forward in the software development process as possible. This enables developers to inquire about the status of a library or component, and its possible vulnerabilities, at the time of its installation. Should the component prove to be too old, too faulty or legally unsuitable, the developer saves time and unnecessary integration work.

TrustSource uses the Check-API for this purpose. It enables developers to perform these checks from the command line. The TrustSource scanners already mentioned above (see here) can also cover this task automatically in the CI/CD pipeline for a large number of languages. This is already _after_ integration, but still _before_ further processing by others.

TrustSource can also integrate runtime components as infrastructure modules in the respective project context and enrich them with metadata by means of crawlers that run independently in the background. This consolidation of information reduces the documentation effort and, by concentrating the information in one place, allows you to automatically monitor released compositions throughout the life cycle. Detailed information on the individual TrustSource services and how they support compliance tasks can be found at https://www.trustsource.io.

Conclusion

The preceding explanations show that by using container technologies such as Docker, runtime components can quickly become part of the “distribution” in addition to the pure source code. This increases the urgency of precise, systematic software composition analysis. Although there are a handful of open source tools that make it possible to approach this challenge, they are still quite isolated and only loosely integrated into a comprehensive, life-cycle-oriented process.

With TrustSource, EACG offers process-oriented documentation support that is designed for the life cycle of an application and integrates many tools for the different tasks. TrustSource itself is available as open source in a community edition for in-house use, but may also be subscribed to as SaaS.

EACG also offers consulting in the area of Open Source strategy as well as in the design of Open Source governance and compliance processes. We develop the TrustSource solution, which is also available as open source, and thus help companies to securely exploit the advantages of open source for their own value creation and to avert the associated risks. 


Scanners

This page will be populated shortly…

Meanwhile, you can find the scanners here.


Understanding the most important vulnerability acronyms

Since the Equifax incident, vulnerability management has gained a lot of attention. But what does "vulnerability" or "known vulnerability" mean? How should such information be handled? And why is this particularly important for open source components?

To answer these questions, we will publish a short series of articles. First we dive into the goals of vulnerability management and its basic concepts. Then we turn our attention to the process of vulnerability management, and finally we show how TrustSource may support you in performing these tasks.

The goal of vulnerability management

The goal of vulnerability management should be to MINIMIZE THE POSSIBLE ATTACK SURFACE of your environment, where "the environment" may be any scope of software you define (a SaaS, a software package or a complete enterprise). Minimizing the attack surface means, on the one hand, accepting that a residual risk will remain. On the other hand, it means knowing all possible attack vectors, assessing the associated risk (maximum loss) and deriving measures well suited to reduce the attack surface to a financially acceptable risk. That said, it is absolutely essential to assess your environment using the most recent knowledge - known vulnerabilities - and to address all critical aspects.

The acronyms and concepts

This goal sounds ambitious. But don't panic! There are some concepts out there that help you do the job. Before we step into the process, we advise you to familiarize yourself with the following abbreviations, terms and concepts:

  • CVSS = Common Vulnerability Scoring System

has been introduced to measure the impact of a vulnerability. It uses a scale of 0-10, with 10 being the highest and most critical. Everything above 7.5 may be considered critical. CVSS is currently at v3; however, vulnerabilities reported before 2016 will have been scored in CVSS v2.

  • CAV = Common Attack Vector

describes the identified attack, covering aspects such as the prerequisites for executing the attack and the impact and effect it will have. In v3 these are attack vector (AV), attack complexity (AC), privileges required (PR) and user interaction (UI). In version 2 you will see attack vector (AV), access complexity (AC) and authentication (Au), as well as the impacts on confidentiality (C), integrity (I) and availability (A).

The standardized description of the attack vector is a great help when it comes to understanding the impact a potential threat or vulnerability may have.
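
As a small illustration, a v3 vector string can be decomposed mechanically. The Python sketch below uses a typical "network-reachable, no privileges required" vector; the rating thresholds follow the official CVSS v3 scale (which is slightly stricter than the rule of thumb above):

    # Decompose a CVSS v3.1 vector string into its individual metrics.
    vector = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"
    metrics = dict(part.split(":") for part in vector.split("/")[1:])
    print(metrics["AV"], metrics["PR"])  # N N -> network-reachable, no privileges needed

    # Map a base score onto the CVSS v3 rating scale.
    def rating(score: float) -> str:
        if score == 0.0:
            return "none"
        if score < 4.0:
            return "low"
        if score < 7.0:
            return "medium"
        if score < 9.0:
            return "high"
        return "critical"

    print(rating(9.8))  # critical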

  • CVE = Common Vulnerabilities and Exposures

A CVE is a key identifying a particular vulnerability. The key consists of the three letters CVE, the year, and a counter. The counter is assigned by an assigning authority and has no meaning other than to differentiate the individual vulnerability and exposure entries. It is assigned the moment it is requested; no evidence is required to request a number. However, between the assignment of an ID and its confirmation, several weeks or months may pass.
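
The key format is simple enough to validate mechanically, as this small Python sketch shows (the counter part has four or more digits):

    import re

    # A CVE ID: the literal prefix "CVE", a four-digit year, and a counter.
    CVE_PATTERN = re.compile(r"^CVE-\d{4}-\d{4,}$")

    print(bool(CVE_PATTERN.match("CVE-2017-5638")))  # True - the Struts flaw behind Equifax
    print(bool(CVE_PATTERN.match("CVE-17-1")))       # False - malformed year and counter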

  • CPE = Common Platform Enumeration

To provide a sound capability for matching a vulnerability with the components concerned, the CPE has been introduced. Each component that has a vulnerability assigned receives a CPE, currently following specification v2.3. A CPE is a unique identifier of a product, allowing to refer back from the vulnerability to the product. A CPE (v2.3) consists of a type (h = hardware, o = operating system, a = application), vendor, product and version information. A central directory, the CPE dictionary, contains all CPEs ever assigned.
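
The following Python sketch shows how such an identifier breaks down, using the CPE 2.3 formatted-string syntax; the example happens to identify an Apache Struts version:

    # A CPE 2.3 formatted string: cpe:2.3:<type>:<vendor>:<product>:<version>:...
    cpe = "cpe:2.3:a:apache:struts:2.3.31:*:*:*:*:*:*:*"

    part, vendor, product, version = cpe.split(":")[2:6]
    print(part)                      # a -> application (h = hardware, o = operating system)
    print(vendor, product, version)  # apache struts 2.3.31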

However, matching CVEs and their assigned CPEs to real-life components is critical. Wrong matches lead to false positives, putting the cat among the pigeons; missing matches leave vulnerabilities untreated. That is why we pay so much attention to accuracy here.

  • CWE = Common Weakness Enumeration

This is a list of weaknesses found in applications. It is a community approach led by MITRE and the SANS Institute, supported by a huge number of technology heavyweights, listing all kinds of weaknesses and outlining their inner workings, exploit code and further details. CVEs may link to the corresponding weaknesses. The list is a great resource for security experts as well as would-be hackers: it helps to understand how attacks are constructed and what causes them to succeed.

This information supports understanding the impact a vulnerability may really have on an individual application.

So what?

Having read all this, you may want to turn your back on the topic and say, "Well, sounds good. Seems all settled. Why bother?". Yes, a lot of structural work has been done. But these structures have only been created to let you do the job efficiently; the job itself still needs to be done. In our next post on vulnerabilities, we will cover the process of actually assessing a vulnerability and deriving useful actions.


How TrustSource supports OpenChain compliance

OpenChain in a nutshell

OpenChain is a Linux Foundation project with the goal of improving trust in open source software. To achieve this, OpenChain has identified a set of requirements each organization should meet to become a valuable and trustworthy open source user and producer.

"Open source components delivered by an OpenChain-certified organization can be trusted!"

The OpenChain specification defines six goals. Four of them focus on organizing open source usage inside the company, while the fifth addresses contributions to open source projects.

Finally, the sixth goal asks you to take responsibility and declare conformity with the OpenChain rules by having the organization certified. The following outlines the goals and shows how TrustSource supports you in achieving them.

G1: Know your responsibilities

The first step is to know and understand the tasks and obligations within your organization. This typically results in an open source policy: a manual on how to handle open source software correctly. Such a policy describes roles and responsibilities as well as the processes and procedures for dealing with the different use cases.

TrustSource provides pre-defined standard policies and procedures for this purpose. You may want to use them and adapt them to your own needs.

OpenChain requires you not only to provide a policy but also to ensure that your staff is aware of it, so that it can be assumed the policy will take effect. To meet that requirement, a documented procedure is needed to prove that at least 85% of staff know the policy.

Starting with v1.7, TrustSource comes with a learning management system, including a set of online trainings and video materials, that helps spread OpenChain knowledge and behavior and provides learning-success metrics.

In addition, procedures must be defined to identify and catalogue the open source components in use, including the determination of the corresponding rights and obligations.

TrustSource provides a wizard to document the project context, which is essential for deriving the obligations. Changes in the context, e.g. a changed commercial model, are documented, and the resulting new obligations can then be determined.

G2: Assign Responsibilities

Unfortunately, the current state of documentation in the open source space leaves room for improvement. It is therefore quite likely that downstream users will raise questions concerning usage options or contained components. To answer these questions in a professional manner, the specification requires assigning a "FOSS liaison" who has to be publicly announced.
The specification also requires you to standardize and organize this communication following a clearly defined set of rules. This is especially relevant in case of a legal dispute.

With TrustSource you may delegate this task. You create a mail alias, and all incoming requests are routed to our TrustSource help desk, where they are structured, documented and worked on based on a procedure defined together with you.

To meet the requirements of this process, it is essential that sufficient legal expertise is available; it may be internal or external.

G3: Review and approve FOSS

The third goal focuses on the documentation of the software produced and the artifacts used. A qualified and documented process for creating the bill of materials (BoM) is required, where "qualified" means the process must be suitable to really identify all components used.
This should not happen only once; it shall happen on a continuous basis. Especially in continuous deployment environments, keeping BoMs current and archiving older versions becomes an extraordinary obligation. A purely manual approach is hardly imaginable anymore.

TrustSource supports this optimally. Thanks to its integration with build tools, the most recent information is available after each build. Every single version can be saved and archived. The "freeze release" mechanism allows exporting specific versions via the API or as SPDX, letting you provide the most recent BoM with one click.

The specification does not define any requirements concerning the contents of a BoM. However, within the Linux Foundation the SPDX (Software Package Data Exchange) working group has created a specification and a machine- as well as human-readable format for license documentation. It does not solve the whole problem but provides a sound basis for technical documentation.
However, providing a BoM alone does not make the treatment of FOSS fully compliant. Depending on the use case, the clauses of the meanwhile more than 396 known licenses may trigger different obligations. Manner of use, commercialization type, form of distribution and other parameters impact the identification of the resulting obligations.
This is especially important because some licenses terminate the right of use if the obligations are not met. In other words: you no longer have the right to use the component, and using a component without the right to do so is a criminal offense.
Therefore OpenChain requires a legal examination that discovers all obligations concerning the respective use case, to ensure legally compliant use in that particular scenario.

Here TrustSource offers one of its unique capabilities: thanks to its knowledge of the conditions, rights and obligations of several hundred licenses, and a structured capture of the legal context, the TrustSource legal engine can resolve the applicable conditions fully automatically and case-specifically. As a result, project staff receive a list of all required obligations.

G4: Deliver FOSS artifacts

The fourth goal aims at delivering the created compliance artifacts together with the software in a single dispatch.

TrustSource supports this by providing an export of an SPDX document per module or project. In a future version, a complete notice file will be generated and delivered. All this can be done within the application or through the API.

Another legal requirement is audit acceptability. This means older versions must remain available until it is ensured that no old version is in use anymore. This is required to retain the ability to answer questions concerning older versions based on solid documentation.

TrustSource stores the complete history of information and thus allows recalling data of older versions at any time.

On this basis it is always possible to identify which components have been used in which module and under which circumstances (legal context). On the one hand, documentation makes you vulnerable, because it also records what has not been done. On the other hand, documentation creates security, because it can confirm that all possible actions have been taken to achieve legal compliance. Such an approach is well suited to discovering potentially missing aspects or actions.

G5: Understand Community

Open source should not be a one-way street. You should not only take from the community, but understand that you are invited to support it and give back. This typically happens through participation in open source projects, so-called "contributions".
Depending on the engagement, a contribution may range from occasional code fixes up to the leadership of projects. In some cases individuals contribute only occasionally; in other cases complete teams are dedicated to the development of specific modules or entire projects. With growing input, the influence on the project grows. Some companies even create their own open source initiatives.
Whether occasional or targeted, whatever form is selected, the commitment itself needs to be clearly regulated. This comprises questions concerning work time versus spare time, claims or rights concerning the created software, as well as potential claims against the company arising from providing the contributions.
The fifth goal addresses this form of giving back, the contributions to the open source community. It requires that a policy exists that clarifies the handling of such contributions. As with the policy on open source use, this policy must not only exist but also be distributed and well known.

To communicate the policy on open source contributions, the same TrustSource mechanisms as for G1 may be used. TrustSource also provides a sample policy that can be used as a baseline.

It is possible that the policy you choose prohibits contributions to open source projects. In this case, only the prohibition needs to be propagated. In our experience, however, a strict ban is not a suitable answer: if you are using open source seriously, you must accept that you will come across errors. Not being able to repair them because a policy forbids contribution would be negligent.
Such contributions also require a governance procedure. Ideally it does not differ much from the governance process for your own products. This will ease understanding as well as handling for all participants.

TrustSource provides a uniform platform for both use cases, with a broad set of tools for process support.

G6: Certify adherence to OpenChain requirements

Goal number six asks the organization to certify its adherence to OpenChain goals one to five, or rather the corresponding requirements. This is currently possible via a self-audit. The website of the OpenChain project offers a questionnaire, and OpenChain partner organizations support you in implementing the required organizational changes.

TrustSource provides a highly suitable platform to ensure process conformity. It is well known and understood by all involved parties and will therefore support all further clarifications (e.g. upcoming certifications, once OpenChain advances from a specification to an official standard).

Summary

In conclusion, OpenChain conformity will most likely require your organization to change processes and mindsets. This is always a time-consuming effort, requiring dedication and focus. But an enterprise-wide application of TrustSource will remove a large part of the complexity and effort that the transformation towards OpenChain conformity will cost.
We suggest defining the starting scope precisely. It can be a good idea to start locally rather than globally: focus on one entity first, create a success story, and then transfer the experience accordingly. Good stories typically sell well, which eases the further rollout.

This approach is also supported by TrustSource. The multi-entity feature allows differentiating several entities or business areas within one account. They can be managed together but operated in isolation: they may have specific policies or black- and whitelists while sharing, for example, one LDAP.

Even companies or business units that already use a scanning tool retain the freedom to keep operating it. The option to transfer scan results to the TrustSource API, or to import SPDX documents of single modules, allows using TrustSource as the linking platform that ensures sound and conformant legal and security analysis as well as a harmonized delivery platform, thereby ensuring process conformity and harmonized governance.


EACG and OpenChain agree on partnership

Frankfurt, June 8th, 2018. EACG, the parent company behind TrustSource, and the Linux Foundation have agreed on a partnership to cooperate in the OpenChain project.

EACG has been active in the field of open source governance and compliance for several years now. Based on the experience gained in several larger projects, EACG has developed TrustSource, the platform for automating open source governance. "We are close to having all of the stuff automated. Even the legal part!", says Jan Thielscher, summarizing the efforts of the last few years.

"Our platform delivers the technical part: scanning, mapping, documentation and reports. But Governance is much more, that a tool may do. To really ensure compliant software delivery and distribution also processes and culture need to change. This is where OpenChain comes in. The many, well thought and carefully designed requirements will lead towards the required change, if managed carefully. We support that and provide all required features to ensure OpenChain compliance. "

EACG offers consulting services in the area of open source compliance and governance as well as the solution platform TrustSource. Different editions are available according to your needs. Check it out and test it here.


Why does a license matter?

“If someone publishes his stuff on GitHub, he must accept that it will be used by others!”

Unfortunately, we still hear this critical misunderstanding often when finding open source components buried somewhere in source code, without any further declaration, of course. Let us spend a few words discussing this in more detail.

In our Western world, the protection of intellectual property is a high value. The belief that an inventor shall profit from his achievements has been accepted as the driving force behind our wealth and developed status. That is why it is protected by intellectual property laws. This insight has been around for quite some time and has meanwhile been established and harmonized internationally through the Berne Convention.

The governing thought has been that the inventor or creator of a work always owns all rights of usage, modification and all kinds of distribution. This holds for a certain period of time after the work has been created; the period depends on the type of work.

An inventor or creator may transfer his rights to others. The typical form of this transfer is a license.

Without a license, all rights remain with the creator for his protection!

If no license exists, all rights are assumed not to have been transferred, for the protection of the creator. Therefore, anyone using a component without a license starts walking on thin ice. In general, nothing may happen immediately. But who knows what the future holds? Success may breed jealousy, and motivations may change over time. Happy are those who own a license they can rely on!

But it is not only that some contributors of open source software might turn nasty. There is another relevant aspect of licenses: they also clarify the terms under which the right of use is transferred. This protects you from use without right.

In our hemisphere, the use of protected works without the right to do so is deemed a criminal act. This may not only cause immense financial damage due to recalls or brand impact; it may also trigger a criminal investigation. In some countries this does not even require a plaintiff: the role is taken by the prosecutor automatically, triggered by suitable evidence, irrespective of its source (a competitor, a former employee, the original inventor).

To prevent all kinds of damage, it is highly recommended to ensure the availability of and conformity with a license!

To prevent damage, it is highly recommended to avoid using components without a license. But to achieve this, it is essential to know what has been used to build the software and what the resulting obligations are.

TrustSource has been developed to automate this task. With its automated scanning you can detect early which components are used and which licenses (or even no licenses) are attached.

Our architects can help you manage critical cases or identify alternative solutions. Do not wait; start creating transparency right now!