Using CDNs and external dependencies for internal web app development

CDN stands for Content Delivery Network: a way of distributing content regionally, usually with the help of some DNS and IP routing logic.

From Wikipedia's entry on Content Delivery Networks:

" A content delivery network, or content distribution network (CDN), is a geographically distributed network of proxy servers and their data centers. The goal is to provide high availability and performance by distributing the service spatially relative to end users. CDNs came into existence in the late 1990s as a means for alleviating the performance bottlenecks of the Internet,[1][2] even as the Internet was starting to become a mission-critical medium for people and enterprises. Since then, CDNs have grown to serve a large portion of the Internet content today, including web objects (text, graphics and scripts), downloadable objects (media files, software, documents), applications (e-commerceportals), live streaming media, on-demand streaming media, and social media sites."

CDNs and security

CDN services are very useful to the Internet crowd, allowing faster delivery of website content. But the concept is not easily portable inside an Intranet, and that is the issue we'll cover in this article.

As Wikipedia also puts it:

"CDN providers profit either from direct fees paid by content providers using their network, or profit from the user analytics and tracking data collected as their scripts are being loaded onto customer's websites inside their browser origin. "

CDN services (and serverless request serving, which has the same issues) are usually hosted in the cloud, out on the Internet. But that's a matter of personal point of view and preference. The problem is this: if you're developing an Intranet using web technologies, like everybody should have been doing for the past 20-25 years, then in this particular year, 2020, you should be starting to experience some woes, thanks to the Chromium influence on this world (but that's another story).

Atypical problems arise when your Intranet is sitting on a local IP segment (192.168.*.*, 10.*.*.*, 172.16.*.* through 172.31.*.*, etc.) and your content (your Intranet website) is being served on the local network. I know, I know, who runs an Intranet in 2020? Well, the answer might surprise a lot of people. I suspect that any security-conscious enterprise would want to run a pretty exclusive Intranet, with pretty stiff security on the perimeter. (If you're on my site, then I also assume that you are security conscious.)

So, these past years we've seen the rise of CDN services, and a great boon it was to most web developers looking to accelerate their page rendering times, as pages have been getting heavier and heavier for most of us with each year. So I wouldn't put it past a couple of wise web developers working on Intranet projects to also use CDN services located in a foreign/external cloud. Myself, I was about to do just that this morning, when I realized a number of security issues related to this practice.

First off, what is application security in 2020? We've seen new hacking practices recently added to the miscreants' repertoire. Quick list: CPU manufacturer exploits, source exploits (where miscreants insert their own 0-day code into popular libraries), war shipping (the sending of physical packages with battery-powered remote exploit kits hidden in the packaging) and, of course, the infamous hard-drive crypto ransom practices, which on their own seem to have artificially affected the crypto markets.

These details matter to our current analysis, because on the surface we seem to be seeing a shift from old-school brute-forcing and spear-phishing (trojans by email) to these source-insertion mechanisms. And it makes total sense to me, because some software manufacturers have in effect increased their security and hardened a lot of systems. So it makes sense that hackers would focus on easier exploit vectors, or at least attack more efficient exploit vectors that allow them to readily distribute their wares to a wider audience. And therein lies one of the dangers of linking CDN content in an Intranet environment.

Little-known fact about your browser today: it includes a pretty handy sub-process that deals with network scanning and reporting, working much like nmap. Thanks to the likes of Netflix, who couldn't engineer a better distribution mechanism than through the browser, the browsers have taken on a heavy, heavy burden (see bloatware). Bluetooth networking -from- the browser, second-screen auto-magic detection and broadcasting -from- the browser, etc. I'm scared, and so should you be. I recently switched to AMD processors because I didn't like Intel's inclusion of WiFi and WiDi features in their processor platforms. I just hate the idea of having WiFi features available in my "secure" environments.

So, using CDNs to merge external content into an Intranet application is definitely a no-no for me now. I don't trust Google. I don't trust Microsoft. And I certainly don't trust Mr. X's freemium concept of a CDN. (Ah! Yes, our browsers have recently turned into freemiums as well: it's free... but it ain't secure in an Intranet environment if you're using the free version. Kudos to Microsoft for eating Google's vomit in their recent transition to Chromium.)

What I see in these "market strategies" being used by Google, Microsoft and their siblings is a definitive freemium setup, meant to line up extra profit-making with membership-based costs for the ability to remotely administer the security settings in your employees' browsers. And this is particularly painful in 2020: as a lot of enterprises just started work-from-home programs, a lot of "home" desktops must have recently been integrated into enterprise networks, reaching out through software-based IPSec tunnels running on those same "home" desktops. This makes the security administration of remote browser stacks very, very difficult, even for the most technically advanced system administrators.

Exploiting home users to get at the corporate data

This transfer of processing to the homes, for rendering the simplest web pages that might depend on internal security measures, has shaken up the security landscape, and I believe the industry in general is not seeing its implications. It provides a considerable number of "backdoors" for hackers to exploit in order to get into your corporate network. At this point it doesn't really matter if your corporate network is in the cloud or physically in your office; the home worker base connects to your corporate resources in exactly the same way, with the same tools and technologies, and your corporate resources are a bit less protected because the cloud provider is managing what you think is perfect security (while it is not).

A particularly nasty exploit vector does exist in the browsers. It's called bloatware. What's worse is that I can easily point you to public documents clearly detailing the "novice" approach of browser developers, the breakneck designs in their stacks and, alas, the lack of industry experience that these developers have, all in the name of "Open Source". See? I think Google and Microsoft have modified the meaning of Open Source. It remains "free code" in general, but for that particular reason (i.e., it not generating real profits), we should definitely expect them to deploy FUD campaigns meant to distract us from real liberty of choice towards an Enterprise pricing model. Google has been attempting it for the past 5 years, and Microsoft never had to work hard for it.

Alas, this year marked a turning point for Microsoft with the ripping out of their old browser stacks for a switch towards Chromium. Even if they have been making this transition for over 2 years now (with their dual stacks of MS-Chromium under Edge), the final step of moving everything to Chromium occurred less than 3 months ago. Since then, Chromium (heavily financed by Google) became the de facto browser for all the Windows clients, Surface and Chromebooks. By the last statistics I've seen, that's a whopping 80%+ of the "by default" browser market right there.

So what are the odds that exploits will be found in Chromium, affecting a way too big network of devices? Very, very, very good.

Microsoft is renowned for exploitable code, and Chromium should be recognized for its bloat, lack of experience and habit of inventing its own protocols. And therein lies the danger on client workstations. As a system administrator, I've been raging and fighting for the past 10 years against the gradual tendency of ciphering everything internally. Software providers have been integrating cryptography left and right to patch exploitable vectors, and this has the nasty side effect of further hiding, or at least making it extremely more complicated for system administrators to monitor, what is going on.

By inventing their own protocols, Google is pushing the cadence of radical changes to a pace which the industry cannot follow, and that might be the reason Microsoft abandoned their own web-rendering stack. Security scanning can't recognize these new protocols; open source projects cannot quickly integrate them (their RFCs being stuck in Draftlandia, developers usually wait for market demand of a proposed standard before implementing it); most open source projects don't have the additional resources to work in the additional specifications; and thus, in my perception of things, Google is now responsible for confusing and creating havoc in the open source communities in general. Because they do have monopolistic control over the browser platform in 2020. It cannot be legally debated that Chrome is not Google; the debate should be moot, as much of Chrome's R&D is being conducted by unpaid students in Google's Summer of Code competitions. Google would argue that the monopoly doesn't exist because it's open source, but I argue that the organizers of these coding competitions are exploiting programming slaves and, for sure, have their agenda prepared in concert with Google engineers before the events.

If I were an anti-trust case prosecutor, I would consult with people like me. ;) I did watch the latest anti-trust inquiry against Apple, Google, Facebook and Amazon, and personally I was a bit nonplussed by the absence of Microsoft at that table. Never was this Google-Microsoft browser cartel even considered in their inquiries. Meanwhile, Google is FUDding the terrain by marketing Google's Chrome as different from Microsoft's Chromium, while in the developer communities we can see announcements here and there saying that Google is no longer the sole financial supporter of Chromium; it seems Microsoft would share that burden now as well. BUT, I haven't found any document online from Microsoft saying anything like that, yet. The reality is that Chromium is built to be a freemium browser, each brand having an extra layer belonging to them and using their own cloud services. It's only internally, at the rendering layer, that they share the same engine.

Danger! Danger, Will Robinson!

If Chromium is found to be very exploitable, and it will be, the current political environment surrounding Chromium's development would make the patching very difficult. What that tells me, from experience, is that these exploits will be kept very secret because of the widespread effect they would have on the Internet population at large. And people that invest in finding exploits don't always do it for the bug bounty rewards... it's a lot more profitable to sell these exploits to organized groups that know how to use them for a profit. So these types of exploit vectors are normally not made public for a very long time. Consider the Heartbleed and GHOST exploits: they were with us for many years before anybody made noise about them. How can your enterprise be at the ready against these exploits? It's a practical impossibility.

One of those exploit vectors, which I see as related to virtual execution (another illusory security), is the browser's security handling of page processes. Browsers generally attempt to sandbox a page's rendering and execution in order to protect against cross-chatter between different opened pages. Well, that security is very flaky. Extremely flaky. Because all one really needs to bypass it is a memory-resident program (say, a web filtering process) that tricks the browser by passing itself off as a content rendering plugin. Many techniques exist to achieve this, and I'm constantly fighting against them personally in my development, very often finding rogue running processes in my browser console after surfing nothing more than Google search.

Ads can access that space, ill-intentioned web pages can access it as well; all that is needed is for a user to open that page while relatively unprotected. And therein lies another gotcha: if browsers were really capable of sandboxing web pages, then your anti-virus wouldn't be able to access them to analyse them. And that is actually a bug that Chromium introduced in 2017 and which lasted quite a while, without any public fanfare. But even if we assume that web pages can be properly sandboxed, a sandboxed process can still use the network services, crypto libraries, Bluetooth services, camera, microphone and so on. It would require a pretty indecent effort on the part of the Chromium developers to harden their API stack against "the world" of possibilities.

The risks for corporate clients

It has become relatively easy nowadays to infect home users in order to obtain the keys to the corporate environment. Again, if your corporate network is in the cloud, you might be surprised to find out that there are even more exploit vectors to consider. The principal one, which nobody can defend against, is the colocated programs of another client being able to access your supposedly private space. (The sandbox problem again.)

Really, we need to readjust our thinking on what a sandbox really is, because that's what a virtual environment attempts to virtualize. A sandbox could be compared to a child's sandbox: you can put your child in the sandbox and tell him to stay there, but will he? The moment you turn your back on him, he'll reach for your gardening tools and bring them into his sandbox. At the other extreme, parents should remain humane with their children and not have to cage them in a 3-inch-thick iron-plated box. Like children, sandboxed applications require a minimum of latitude to breathe and act. And that's pretty much how Microsoft built their Hyper-V infrastructure in the first place: many sandboxes with access to a common library of tools. (That was their way of cheating on performance, to compensate for the bloat in their OS; otherwise they ended up replicating too much, and that greatly affected performance.) But essentially, from my experience in the OpenBSD kernel, I can easily picture the amount of work that still needs to be done for the bigger kernels. OpenBSD's kernel totals less than 45MB, and we're still working on proper sandboxing (with giganormous efforts). Nobody can properly sandbox, even in 2020, short of a physical proxy-and-sandbox setup on a network pipe using dedicated physical hardware for the task. Even then, that hardware is also susceptible to exploits, which can destroy security expectations on paper.

Also, in my experience, and as the saying goes, "the best doctors make for the worst patients". This is not necessarily targeted at security administrators, who are generally the exception, but considering the powers attributed to them in a corporate environment, directors, team leaders and analysts are usually the first ones to introduce insecure practices into what should be a more secure environment. The more bureaucratic a position is, the more lax its security becomes. Furthermore, the tendency to depend on additional analytic and corporate tools that come with their own bloatware exacerbates the issues greatly. Consider your accounting software that manages payroll, or your precious SOC reporting (if you're a public company). In my environment, I've found it's the little things that can wreak the greatest havoc.

Much the same thing (and much worse!) happens on your employees' home computers. A kid goes surfing on the wrong web page, an employee gets infected on a social network, or again just by browsing to the wrong web page, and so on. Hackers are capable of leaving processes running in browsers, even after the page is closed, to move laterally and infect corporate processes from the background in other rendered pages. Users at home are also very susceptible to WiFi hacking, because as we know today, there is no such thing as secure WiFi anymore. (WPA2 is cracked and WPA3 is barely deployed yet.) And if a cell-phone network is involved... downgrades to 2G/3G momentarily render 4G and 5G security worthless, long enough for hackers to wedge themselves into your connection. Enterprises can deploy all the security hardening tools they want, but they cannot secure what doesn't belong to them, namely the Internet itself. (The only real solution to this problem remains VPNs.)

So we see how client workstations at home can become the next greatest target for exploitation, and how the Covid-19 crisis exacerbates this phenomenon across wide spans of industries. A lot of companies were not ready for remote deployments, especially those built on Microsoft Windows. (I have yet to meet one consultant capable of designing a proper Microsoft network that is secure, much less with remote capabilities; even after 20 years and the numerous advancements that Microsoft made, the confusion they promote is a double-edged sword against this type of deployment specifically.) Who's ever heard of, let alone implemented, the Microsoft software routers? Who relies on Windows to secure a network perimeter? (Certainly not a security-conscious administrator like you.) Imagine extending this technological stack to secure home workers now.

Usually, when hackers target home corporate users, it will be to gain access to the internal corporate structure. It could be as easy as using a keylogger on a strategic workstation, or as complicated as deploying a 0-day virus that works its way to the corporate servers. But having read this far, I'm sure you understand the simplicity of these activities now.

GitHub, npm and external dependencies

While we're on the subject of CDNs and importing external libraries, another situation that arises too often (not on my networks, but I see it online, in forums, GitHub stacks and so on) is the "blind" inclusion of external dependencies in web projects in general. A lot of people are doing that right now, and it's scary as batshit. When I need to use an external lib (BlueImp's image canvas, for example), I take the time to download the code, READ THE CODE ENTIRELY, encapsulate the code in a class of my own with a simple class override (usually to maintain a uniform programming framework) and subscribe to the project's bug mailing list to keep track of bugs, exploits and major changes which might require me to redo the download/reading/encapsulation and QA tests.
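
To illustrate that encapsulation habit, here is a minimal TypeScript sketch. The vendored module path and its renderToCanvas() signature are hypothetical stand-ins (not BlueImp's real API); the point is that the rest of the application only ever imports the wrapper, so a re-vetted update of the vendored library touches exactly one file.

```typescript
// intranet/ImageCanvas.ts
//
// Hypothetical wrapper around a locally vendored library. "./vendor/imageCanvas"
// and its exports are assumptions for the sake of the example.
import { renderToCanvas, VendorOptions } from "./vendor/imageCanvas";

export interface ImageCanvasOptions {
  maxWidth?: number;
  maxHeight?: number;
}

export class ImageCanvas {
  constructor(private readonly defaults: ImageCanvasOptions = {}) {}

  // Single choke point: if the vendored code changes after a re-vetting,
  // only this method needs to be revisited and re-tested.
  async render(file: File, options: ImageCanvasOptions = {}): Promise<HTMLCanvasElement> {
    const merged: VendorOptions = { ...this.defaults, ...options };
    return renderToCanvas(file, merged);
  }
}

// Usage elsewhere in the Intranet code:
//   const canvas = await new ImageCanvas({ maxWidth: 800 }).render(file);
```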

That means that any sane developer can't work with more than 2-3 external dependencies overall. I find it quite impossible to keep track of 20+ dependencies like this... just impossible. So what happens is that, in the corporate world, we would normally invest in security analysis and turn the radar on that code stack (right?... hihihi) instead of investing in more developer time. I think we could argue on this theme for a long time; it sounds to me like a big divisive concept. Either crank up the scanning and detection, or minimize your dependencies.

In these past years a number of npm exploit events have occurred, and I find them strangely reminiscent of the cryptocoin thefts on digital exchanges. Once these events occur, there's simply no certainty as to who perpetrated them. Was the library built for the purpose of providing a backdoor to some obscure hacker group? There must be at least 1 example for every million libraries out there. In computer security parlance, that 1 out of 1 million means it is >insecure<. The possibility exists, you just don't know when.

Clients, servers and the localisation of exploit events

I remember my years as a technical steward for my Philosophy Blue development team; in the hiring process, I would typically inquire about the developer's understanding of concepts such as client-server programming, web request processing and so on. And I was really surprised to see that a concept such as client-server could be so misunderstood in general. I wouldn't be surprised in 2020 to witness the same general misunderstanding in the developer communities, since web requests and client-server communications have become even more complicated with the introduction of new platforms, crypto libraries, HTTP streaming and peer-to-peer. (Let's not even delve into the matter of onion routing.)

So I'll attempt to clarify what it means to include exploitable code in these different contexts, and its effect on the other end of the communication stream.

Typically, we understand "exploits" in general as a server failure. While this is generally true in practice, in theory it's another story. When hackers target a server, it's usually because they're after the data on that server (*** we'll need to clarify a bit here too), or after the means of distributing an exploit to the users connecting to that server. Why would hackers do this? Well, because they're really after "everything" they can get their hands on (which should be the first premise in any security analysis).

Hackers could be looking for wallet passwords, in which case they must infect a LOT of web users. And the means to achieve that would be to distribute their packages in very popular downloads. If we could intercept Google's distribution network, we could literally infect a big part of the world. Imagine a coordinated attack on Google, Microsoft and Apple.

As web developers, we work defensively first. Every input can be someone attempting to exploit us. Every request could be an exploit attempt. Every subscription could be fake, with the goal of spamming. And every contact form is susceptible to a LOT of spam, of course. But this approach also leaves us thinking that "we" are on the defensive, always protecting "our" internal resources from external threats, and it's a bit wrong to think like that. Given the above examples of exploiting web data transfers, we can understand that it's not always about "us", but also about our end users. So it would be very easy for hackers to exploit our mode of thinking by making sure the exploits are hidden on the server and only revealed on the clients. (Who checksums their entire page renderings against their "secure" source stacks?) And there you have it: as web developers, we must also think about our users. For some of us, the equilibrium between internal resources and external users can be lopsided towards the millions of users. And we know, we've seen a number of events affecting social networks and their users, of course.
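
To make that checksum question a little more concrete: a minimal Node/TypeScript sketch that computes Subresource Integrity digests for the scripts you serve, so the browser itself refuses any script whose bytes no longer match what you vetted. The file names are hypothetical; the sha384/base64 format is what the HTML integrity attribute expects.

```typescript
// sri.ts - compute Subresource Integrity values for vetted assets.
// File list below is a placeholder; point it at your own static assets.
import { createHash } from "crypto";
import { readFileSync } from "fs";

function sriDigest(path: string): string {
  const hash = createHash("sha384").update(readFileSync(path)).digest("base64");
  return `sha384-${hash}`;
}

// Paste (or template) the resulting values into your script tags, e.g.:
//   <script src="/static/app.js" integrity="sha384-..."></script>
for (const asset of ["static/app.js", "static/vendor/imageCanvas.js"]) {
  console.log(`${asset}  ${sriDigest(asset)}`);
}
```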

Is this the same as cross-site scripting and SQL injection vulnerabilities? Definitely not, because in those situations the hackers are looking to infect the content that you're distributing. Sure, they could gain access to your server through a (too simple) SQL injection, but the end game would be to install one line of very strategic code that allows them to trojan your users or move laterally to your other servers.

So, that's one aspect of server exploits. This could lead some developers to think that client-sourced dependencies (where we transfer the responsibility of assembling potentially unsafe code to the client browser) are a better alternative to avoid a lot of those local server exploits. And these developers would be wrong, because there always remains the possibility of hackers exploiting those external dependencies without you noticing it at all, thus making the situation worse.

A good example of that construction philosophy is Facebook and most other social networks. They effectively provide a javascript client which initializes on the client side to assemble its dependencies, thus avoiding the server load, and exploit vectors on the servers. The difference with these companies, though, is that they can afford to assign a team of 10 developers to each of their dependencies for security analysis and maintenance. Where the concept gets dangerous, therefore, is when an enterprise uses this same construction model without the internal development resources necessary to maintain a vigilant eye on those dependencies. (See digital banking, online exchanges and a lot of the crypto-coin wallet projects.) Again, a wise Intranet developer playing it Moneyball style would do what I do: only depend on what you can read, understand and analyse fully.

Implementing a methodology for safeguarding an internal software stack against external dependency injections

OK Stéphane, all this is pretty scary, so what are the options? Well, for starters, we could rethink the content distribution network and import the concept into our own infrastructures. If your enterprise comprises many locations, you probably have interconnections between your branches and offices using MPLS or VPNs (because you are security conscious). So, technically, it should be as easy as pie if you also already have an internal DNS infrastructure in place to manage your routing. All that would be needed is:

  • httpd cache services on your gateways (or on separate systems located on each remote network; I would assume that 1GB of RAM dedicated to the cache partition wouldn't be taxing and would be pretty snappy for rendering times. This is easily available on any whitebox VPN worth its money.)
  • a content distribution methodology from your "validated" source servers (I usually use an rsync script to push my websites to their different hosting locations; same thing really, we're rsyncing a specific site folder to many remote servers... whooo! technology! See the sketch after this list.)
  • DNS resolving services accepting local zone definitions for the local network being served. (Really, you need an authority server locally that provides an extra layer of DNS resolution to your local DNS resolver for its local network segment. We can manage each network's local zone files centrally though, and keep pushing them out using rsync; easy as pie.)
  • probably a nice little report interface showing the timestamps on the different caches you maintain.
  • Start revising that Internal software stack to incorporate external dependencies in your own development process, or exclude them and code your own.
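
As promised, here is a minimal sketch of that push step, written in TypeScript for consistency with the other examples. The host names, paths and rsync flags are placeholders for your own inventory; the script simply shells out to rsync over ssh toward each branch cache, from the single "validated" source.

```typescript
// push-site.ts - push the vetted site folder to each branch cache over ssh.
// Hosts and paths below are hypothetical; adapt to your own setup.
import { execFileSync } from "child_process";

const SOURCE = "/var/www/intranet/";   // trailing slash: sync the folder contents
const TARGETS = ["cache1.branch1.example", "cache2.branch2.example"];

for (const host of TARGETS) {
  console.log(`pushing to ${host} ...`);
  // -a: preserve perms/times, -z: compress, --delete: mirror removals on the cache
  execFileSync(
    "rsync",
    ["-az", "--delete", "-e", "ssh", SOURCE, `${host}:/var/www/intranet/`],
    { stdio: "inherit" }
  );
}
```

The same push mechanism works for the centrally managed local zone files mentioned above; only the source and destination paths change.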

Sounds pretty easy, doesn't it? I blame the Cloud FUD campaigns for corrupting so many young minds...

So, why would I go through the effort of writing this article and finish with a pseudo-proto-plan for an internal CDN setup? Quite simply, because no such solution exists in the industry. ALL the CDNs of this world (private or public) are built from and for cloud components. We generally fall back on (local) content-caching and proxy-relay processes to accelerate distribution on internal networks. Bear in mind that the crux of this solution is not to develop an internal CDN, but rather to internalize external dependencies through a security-vetting process. From that angle, if you already have proxies and content-caching, then you should only worry about internalizing the external dependencies.

On paper, enterprises that do that usually have literal server rooms in each branch location (or at least part of a rack setup), because they'll be deploying domain services and telephony services, and managing their internal DNS systems, on their own hardware (dedicated or virtual, it doesn't matter in our discussion). Plus the firewalls, routers and switches. In a situation such as this, considering internal CDN setups is quite moot. We only beef up the proxy-relay services to handle the web pages in a timely fashion.

The problem is more apparent when your branch offices serve only 1 user. Enterprises will not deploy 5000$ worth of hardware and software to each remote location. We have a tendency to rely on a VPN and connect that user to the central internal systems. And I imagine a lot of people have been deploying Windows-based IPSec VPNs in 2020... (tsk tsk tsk!). This also exacerbates the security issues, because at this point you're connecting a farm of privileged users to your internal network, and the endpoint doing the VPN is the same one that is susceptible to more exploits than any other. To top it off, you're stuck analyzing that traffic from a Microsoft endpoint using what I would consider flaky IPSec key management practices. The simple act of writing those logs in decrypted format can seriously bog down (or crash) a server under load, with potential for hardware corruption and the potential loss of keying material.

Consider this little bugger for an instant: how do your analysis tools react to such a seemingly benign entry?
"sshd[80157]: Failed password for invalid user #include <stddef from 54.37.158.218 port 59612 ssh2"

For that reason, I normally justify a white-label VPN/gw/fw/router setup using a little Arm/AMD SoC machine. It converts into a one-time expense of about 500$ and rids us of so many potential vectors. Plus, with the right number of Ethernet ports, it's easy to manage more than 1 VPN stream going to the same house, and thus to clearly separate data traffic from voice traffic. I think this solution beats any and all software products out there.

So, with this in mind, we deploy white-label VPN boxes that we can manage ourselves, deploy services onto, and remotely administer just like any other Unix server. Furthermore, this provides a very clear demarcation point in the homes of your corporate users to justify and support auditing, monitoring and even security scanning. Plus there is the advantage of being capable of physically wiring the remote user to your network, making sure the VPN traffic is encrypted before it travels on any WiFi network. This effectively eliminates potential MITM attacks. Another benefit of such setups is the firewalling, which is centrally managed using your corporate rules. With a proper homogeneous hardware deployment, all the processes deployed on those machines become very easy to manage.

The only crucial point of such a setup remains the fact that the gateway-firewall-VPN box cannot access its own VPNed network segment (as this normally is a hack on the OSI network layers, which is also why a software VPN doesn't really provide the required security mechanisms). So, to keep everything functioning in harmony, we also require a minimalistic service deployed on a public IP which allows us to inventory these firewall boxes and, while we're at it, monitor them. With just proper TLS support on that REST service, we can avoid fraudulent requests, because our internal methodology also includes the usage of public key cryptography in the identification process of the firewalls. Of course, we're assuming you have a central HQ network with some public IPs available for running web services, preferably with a clear and distinct DMZ zone. In my experience working with OpenBSD (i.e., the real source of most networking stacks out there), by default a system contained within a central DMZ can still communicate with remote VPN boxes by using the remote box's internal gateway IP. This makes it a snap to monitor all the remote gateways from our central DMZ, and to push code updates using rsync. It also forces us into building proper and secure infrastructure, by default.
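
A minimal sketch of that check-in idea, again in TypeScript for consistency with the other examples. The endpoint URL, key path and payload fields are all hypothetical; the point is that the box identifies itself with a signature over its report, which the central service verifies against the public key it already holds for that box.

```typescript
// checkin.ts - a remote gateway reports itself to the central inventory service.
// URL, key path and payload fields below are hypothetical placeholders.
import { createSign } from "crypto";
import { readFileSync } from "fs";
import { hostname } from "os";

const INVENTORY_URL = "https://inventory.hq.example/api/checkin"; // central DMZ service
const PRIVATE_KEY = readFileSync("/etc/fwbox/checkin_key.pem", "utf8");

async function checkin(): Promise<void> {
  const report = JSON.stringify({
    box: hostname(),
    time: new Date().toISOString(),
    tunnels: ["office-hq"],          // whatever inventory you choose to report
  });

  // Sign the exact bytes we send; the server verifies with this box's public key.
  const signer = createSign("sha256");
  signer.update(report);
  const signature = signer.sign(PRIVATE_KEY, "base64");

  // Node 18+ provides a global fetch; swap in your HTTP client of choice otherwise.
  const res = await fetch(INVENTORY_URL, {
    method: "POST",
    headers: { "content-type": "application/json", "x-signature": signature },
    body: report,
  });
  if (!res.ok) throw new Error(`check-in rejected: ${res.status}`);
}

checkin().catch((err) => { console.error(err); process.exit(1); });
```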

If you're going to try this at home for the first time, or if this is your first foray into OpenBSD VPNs, I highly recommend allocating extra time for the RTFM sessions, and I recommend you focus on IKEv2 using SSL certificates with a local Certificate Authority. (See ikectl, iked and iked.conf, and ignore the examples with password authentication.) I personally hacked my way through it in about 2 years part-time, and I had been running IKEv1 for many years as well. So, I do have recipes and probably a how-to that I could concoct that surpasses most public examples out there. -to be continued I guess-

OK, I've got my VPN setup and my content caching setup. What next?

Now it's time to focus our efforts on minimizing our set of external dependencies. This requires a bit of research and strategic thinking. Typical Intranet systems, I assume, would be built on the Windows platform, thus using IIS and Microsoft APIs and languages. The important aspect is to consider what amount of code is yours (proprietary, made in-house) versus externally sourced (those libraries that you download and install as components in your programming).

If a service stack has been in operation for many years, chances are that a number of dependencies can be entirely eliminated, most programming languages now offering a much richer set of library functions. As you progress through your application stacks, you'll eventually end up with a couple of exceptions for which a decision needs to be made: internalize or replace. There's absolutely nothing wrong with turning public code into private code; some of it is out there for that very purpose. Whatever profits the author was making, he wasn't counting on you in the first place. (Well, if we consider the BSD/MIT open source licenses as a clear-cut example; GPL is usually more open to the freemium pathway.)

So, we should move those externally-sourced dependencies into our Intranet environment by appropriating their code and integrating it into our own source distribution system. Ideally, this would go through security scrutiny in the hands of at least 2 developers. We can see how this generates a different sort of problem at the industry level: so many enterprises, so many dependencies, such a creative process everywhere we look. So, my personal thinking is that, for computer security to progress in general, software developers need a system they can rely on to establish trust relationships with other developers who can complement their work and save precious time (or improve the security discovery).
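
One low-tech way to make that appropriation auditable, sketched here with hypothetical paths and manifest format: keep a manifest of SHA-256 digests signed off by your two reviewers, and have the build refuse any vendored file that no longer matches it.

```typescript
// verify-vendor.ts - refuse to build if vendored code drifted from what was vetted.
// Manifest path and shape are assumptions: { "vendor/imageCanvas.js": "<sha256 hex>", ... }
import { createHash } from "crypto";
import { readFileSync } from "fs";

const manifest: Record<string, string> = JSON.parse(readFileSync("vendor.lock.json", "utf8"));

let failed = false;
for (const [file, expected] of Object.entries(manifest)) {
  const actual = createHash("sha256").update(readFileSync(file)).digest("hex");
  if (actual !== expected) {
    console.error(`VENDOR DRIFT: ${file} no longer matches the vetted digest`);
    failed = true;
  }
}
// Fail the build until a reviewer re-vets the change and updates the manifest.
if (failed) process.exit(1);
```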

We could envision a trust-building system based on PGP quite easily, but alas, the PGP communities lag a bit behind. Unfortunately, most of the crypto-trust development investment is currently pouring into crypto-coin development, and most of it is actually trustless, not trust-fomenting. A tiny gotcha of the current era.

The real issue is who to trust. If I trust Theo de Raadt for his firm dictatorial grasp of the OpenBSD kernel, it doesn't mean my neighbor should trust him as well. But if my neighbor trusts my judgment, he might indirectly trust whomever I trust, and that's what PGP allows, quite uniquely. What we lack is a set of layered applications (like currently happens in the crypto-coin arena) building on top of the provided technological features; that is, a trust-accumulating and referencing system. Alas, a recent PGP oopsie destroyed any hopes of seeing such a system happen. Someone needs to rewrite PGP now, at least the public key server logic. Another story worth writing about, I guess.

There's npm, which attempts to provide a catalog of exploits and auditable-code methodologies; see https://docs.npmjs.com/auditing-package-dependencies-for-security-vulnerabilities. The issue with this methodology is that it doesn't cover the most important exploits, the zero-day ones.
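
If npm's audit catalog is part of your process anyway, the cheapest improvement is to make it a blocking step rather than an advisory one. A small sketch, assuming the threshold and the pipeline wiring are yours to adjust; npm audit exits non-zero when it finds vulnerabilities at or above the requested --audit-level.

```typescript
// audit-gate.ts - fail the pipeline when npm knows about high/critical issues.
import { spawnSync } from "child_process";

const result = spawnSync("npm", ["audit", "--audit-level=high"], { stdio: "inherit" });

if (result.status !== 0) {
  console.error("npm audit reported high or critical vulnerabilities; stopping here.");
  process.exit(1);
}
console.log("npm audit: nothing at or above the 'high' threshold (zero-days excluded, of course).");
```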

Another project in the same vein, https://npms.io/about, attempts the same thing in a more automated fashion, but as they say, they haven't tackled the "personalities" vetting approach yet. (Good, because that was my idea! ;)

CDS.ca, the Canadian Digital Service agency putting forth open-source methodologies for handling continuous DevOps, builds on the above npms.io system. At the time of this writing, I've analyzed their source code and the different projects in their repository, and it looks like a work in progress still. (Which might hit a big wall, given the above considerations.)

To be continued I guess...