Developing The Accelerated Immersive Learning Process

The META-OBJECTIVE or LARGE GOAL of this activity is developing our Accelerated Immersive Learning Process.

In order to develop this discipline … we need a more specific, practical project in which to develop the process. As part of this, we will want to be searching extensively, exhaustively for inexpensive demonstration or prototype-level projects and opportunities for involvement in development communities which are working in the area targeted by our Accelerated Immersive Learning Process. Of course, the exact content for any self-starting autodidact using a process like this would be adapted [by that autodidact] based on that individual’s prior experience and their particular learning requirements.

The more specific, practical goal which we develop below is about providing a comprehensive, but not exhaustive, curriculum that will enable the learner to build a solid foundation in what it means to think as a polyglot as we develop a working background in Rust-Lang, WebAssembly, Tauri, ROS2, and distributed systems. In a nutshell, we want to develop the foundational skills for building fault-tolerant real-time operating systems for robot swarms operating in adversarial environments.

# Fundamentals (50 modules):

0: Before we EVEN start down the Rust-Lang portion of this program of study, let's spend some time carefully reviewing the reasons WHY anyone would want to use Rust-Lang … let's start by spending some more time reading the Linux Plumbers Conference blog and going over the discussions that have happened in the Rust MicroConferences held in conjunction with the LPC … this push for Rust-Lang might be about more than just memory safety, but IS IT REALLY?? … Whether Rust-Lang gets any real traction will come down to how well the Linux kernel's core structures and lifetime rules, which are written in C, can be mapped into Rust-Lang structures and lifetime rules … the Linux kernel is not the FINAL or LAST test for Rust-Lang to take over low-level software, but it IS the first important test of whether Rust-Lang is more than a temporary, seemingly significant fad [like bitcoin] … maybe if we really dive into the bigger objectives behind whatever seems to be driving Rust-Lang, we will come away from the Fundamentals modules in this syllabus thinking that this course should really be much more heavily based on C-Lang.

Rust-Lang might be a horrible language for the same reason that C++ is a horrible language, ie for ENABLING substandard programmers to even have a chance to write code. As Linus Torvalds says, "C++ is a horrible language [because] … it's much much easier [for substandard programmers] to generate total and utter crap, that actually gets used. Quite frankly, even if the choice of C were to do *nothing* but keep the C++ programmers out, that in itself would be a huge reason to use C."

Maybe it does not matter … we will look at Rust-Lang AS A BROWNFIELD LANGUAGE, strictly to see how it does C-Lang type things … why is the Rust-Lang way better than the way C-Lang programmers would do it? … our approach to learning Rust is actually a C-first approach, because the legacy C code will NEED TO BE operational … we will certainly get into plenty of legacy C code along the way, as we work through examples and ask our AI assistants to help us understand why something is done as it is in C-Lang.

SERIOUSLY exploring Rust-Lang is really more about learning new languages, polyglotism and why things like domain-specific languages matter … learning Rust is not about how Rust is going to replace C-Lang as the dominant force in low-level programming anytime in the foreseeable future.

The following list of domain-specific languages illustrates a wide range of applications, from web development and databases to hardware design, data analysis, infrastructure management, and more. Their relevance and usefulness may vary depending on the specific domain and the evolving technology landscape … but the key point illustrated by this short list is that learning yet another new language is something we should expect … learning a compiled, memory-safe language like Rust might have a steeper learning curve, but being able to understand programming language theory has to be one of your meta-competencies. Hopefully, this listing of just ten different domain-specific languages will illustrate why we learn different programming languages [as the need arises] in order to develop our proficiency in learning programming languages.

  1. SQL (Structured Query Language): Used for managing and manipulating relational databases.

  2. HTML (Hypertext Markup Language): The standard markup language for creating web pages and web applications.

  3. CSS (Cascading Style Sheets): Used for describing the presentation of a document written in HTML or XML.

  4. RegEx (Regular Expressions): A sequence of characters that define a search pattern, used for pattern matching and text processing.

  5. Markdown: A lightweight markup language used for formatting plain text documents, often used for documentation and web content, including social dev fora like GitHub Discussions.

  6. LaTeX: A document preparation system used for technical and scientific documentation, known for its high-quality typesetting.

  7. VHDL (VHSIC Hardware Description Language) and Verilog: Used for describing, designing, and verifying digital systems and integrated circuits.

  8. YAML (YAML Ain’t Markup Language) or TOML (Tom’s Obvious Minimal Language): Human-readable data serialization formats used for configuration files and data exchange.

  9. Gherkin or rust-rspec: Used for describing software behavior without detailing how that behavior is implemented in code; used in Behavior Driven Development (BDD) and for executable specifications written in plain text.

  10. GraphQL: A data query language developed by Facebook as an alternative to REST APIs and ad-hoc webservice architectures. GraphQL is a strongly typed runtime that allows clients to dictate what data is needed by defining the structure of the data required, so that exactly that structure is returned from the server.

By solving the memory-safety challenge, Rust-Lang offers a seriously compelling advantage over C-Lang. Memory safety is going to remain one of the most significant concerns in programming; this has been recognized for some time, which is why Rust-Lang and other alternatives sprang into existence. The responsibility for ensuring memory safety in C largely falls on the C developer, but that may change with future C standards and automated dev tools. C-Lang developers can certainly exploit C development tools that aid in memory safety, and there are scads of bright people working on memory-safety-driven changes to the language, compiler, and libraries.

Placing the responsibility on the developer is obviously far from foolproof, BUT developing memory-safe code is not impossible … for example, packaging several checks into safety profiles can do much [although far from everything that Rust-Lang does] to enforce memory safety. Developers can follow certain practices [and set up their toolchains accordingly] to achieve memory safety, such as: a) nulling out pointers when freeing memory to avoid use-after-free and double-free bugs, b) performing bounds checks to avoid out-of-bounds (OOB) read and write vulnerabilities, and c) avoiding recursion, or using it only within known limits, to prevent stack-exhaustion and heap-exhaustion vulnerabilities. Although these practices can help, they do not guarantee memory safety to the degree that Rust-Lang can.
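To make the contrast concrete, here is a minimal Rust sketch (the `checked_access` function name is our own, purely for illustration) showing how practice (b) above is baked into the standard library, and how move semantics make practice (a) unnecessary:

```rust
// .get() performs the bounds check the C guidance above asks the
// developer to remember by hand; an out-of-range index yields None
// instead of an out-of-bounds (OOB) read.
fn checked_access(data: &[u8], idx: usize) -> Option<u8> {
    data.get(idx).copied()
}

fn main() {
    let buf = vec![10u8, 20, 30];
    assert_eq!(checked_access(&buf, 1), Some(20));
    assert_eq!(checked_access(&buf, 99), None); // no OOB read possible
    drop(buf); // explicit "free"; any later use of `buf` is a compile error,
               // so there is no pointer to null out after freeing
    println!("bounds checks enforced by the library, not by convention");
}
```

The point is not that C developers cannot do this, but that in Rust the safe path is the default one.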

The thing is, there's a LOT of C code out there already and it's not going away soon, partly because even high-level programming will not soon be replaced by conversational AI, and partly because there will always be important niches where C's raw control and established ecosystem are irreplaceable for troubleshooting issues in low-level programming. It is probably worth going down the Rust-Lang path as a novice or journeyman, even if one does not start out with an aim for mastery.

It's not really about the C vs Rust programming language argument … because we all need to be conversant in different languages and idioms … it's the interactions with the larger POPULATION of developers writing code that matters. It is strongly recommended that anyone working in this area engage in literate programming to communicate ideas. One example of this idea of being enough of a polyglot to communicate the general gist of an idea to others is .NET Interactive and Polyglot Notebooks, which allow us to programmatically demonstrate ideas and use diagrams-as-code tools like Mermaid for Sankey flow diagrams, gitgraphs, and mind maps.

There are interesting and important conversations happening now in areas like machine learning, AI and large language models that really call for one at least being able to follow [and ask useful clarifying questions in] a serious discussion happening in some corner of the larger polyglot ecosystem … since learning languages is really about understanding how the people who express ideas in those languages think, we will want to choose the ways of expeditiously ascending the Rust-Lang learning curve that make the most sense for us: IMMERSING ourselves in the language and getting up to speed as rapidly as possible.

1: Preliminaries: Basic Installation … start with the normally recommended rustup basic installation process and invoke rustc to compile and then run the "Hello, World!" example, which means that you need to ensure that you have everything ready to go. At first, you're just going to get a start with Rust program anatomy, including things like comments or additional junk code you care to add, just to monkey around … the NECESSITY of monkeying around with the hello_world program can't be stressed enough; after all, this is just a sandbox, so [read ahead to learn ahead of the class, do things when you don't know what you're doing, look over other examples in Rust By Example] be sure to add stupid stuff and find different ways to break the simple program. You want to come away from this with some vague sense of how Rust checks formatting correctness at compile time, noting that Rust is an ahead-of-time compiled language and uses linters. You will especially want to notice and break the syntax of the main() function with its body wrapped in {} and the formatted print.
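As a minimal sketch of the program anatomy this module describes (the `greeting` helper is our own addition, added only to give you something extra to break):

```rust
// Anatomy of hello_world: a line comment, a `main` entry point whose
// body is wrapped in {}, and the formatted-print macro. The format
// string is checked at compile time -- try removing the argument or a
// closing quote and watch rustc reject it before anything ever runs.
fn greeting(name: &str) -> String {
    format!("Hello, {}!", name) // format! is println!'s sibling macro
}

fn main() {
    println!("{}", greeting("World"));
}
```

Breaking this on purpose (mismatched braces, a `{}` with no argument, a stray semicolon after `format!`) is exactly the kind of sandbox monkeying the module calls for.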

Add different kinds of junk filler code … just to experiment a bit with the rustfmt tool for formatting Rust code according to style guidelines. But also take a quick overview look at the other Rust development tools, such as rustfix and the rust-clippy tool with its collection of over 700 lints. You can master these later, but at first you want to RAPIDLY get a very high-level overview of the rust-analyzer extension for VSCode as well as other Rust-related extensions for VSCode, such as CodeLLDB. By all means, if you use VSCode, be SURE to ALSO work through the tutorial for using Rust within VSCode to start getting a feel for how VSCode can be used for Rust development.

Be sure to compile and run trial example programs beyond just Hello, World!. Then experiment with Rust’s Cargo build system and package manager and skim over The Cargo Book. The POINT of getting way ahead of yourself in the Preliminary module is to go wild, explore how much you can learn in just one day … don’t worry about getting it down perfectly, you will not have really broken anything – we will be coming back to this material again and again … at first, just install Rust, compile and run the hello_world program, drink from the firehose and … BREAK THINGS.

2-10: Start scratching the surface of how Rust actually works by methodically going through each one of the exercises in rustlings the right way … because rustlings is going to be the best way for large audiences of Rust noobs to assuredly learn the most basic basics of fundamental programming in Rust. While you are going through the exercises, you will want to use The Rust Programming Language book, the most comprehensive and theoretically deep resource for exploring concepts. You will also want to rely upon Rust By Example for practical ideas for alternative examples to tie all of the concepts together so that you might firmly understand common programming concepts: keywords and syntax, variables and mutability, data types, functions, comments, and control flow.
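A compact illustration of the concepts these modules cover, tying variables, mutability, data types, and control flow together in one toy (the `classify` name is our own):

```rust
// Data types and control flow: an if/else chain returning a &str.
fn classify(n: i32) -> &'static str {
    if n < 0 { "negative" } else if n == 0 { "zero" } else { "positive" }
}

fn main() {
    let x = 5;          // bindings are immutable by default
    let mut total = 0;  // `mut` opts in to mutation explicitly
    for i in 0..=x {    // inclusive range, a common control-flow form
        total += i;
    }
    assert_eq!(total, 15); // 0+1+2+3+4+5
    println!("{} is {}", x, classify(x));
}
```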

11-20: Resource Acquisition Is Initialization (RAII), ownership and moves, borrowing, lifetimes

21-30: Structs, enums and pattern matching
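A hedged sketch of the ownership material in modules 11-20 (all names here are our own, for illustration): moves transfer a value, borrows lend it, and RAII frees it deterministically.

```rust
// A shared borrow: we can read the strings without taking ownership.
fn total_len(items: &[String]) -> usize {
    items.iter().map(|s| s.len()).sum()
}

fn main() {
    let a = String::from("swarm");
    let b = a; // move: `a` is no longer usable, so no double free is possible
    let v = vec![b, String::from("robot")];
    let n = total_len(&v); // borrow; `v` is still owned here afterwards
    assert_eq!(n, 10);
    // `v` is dropped at end of scope -- RAII frees the heap memory
    // without a garbage collector and without a manual free() call
}
```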

31-35: Error handling, Option, Result
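A small illustration of the error-handling modules (the `parse_port` helper is our own): `Option` models absence, `Result` models recoverable failure, and `?` propagates errors without exceptions.

```rust
// Result for recoverable failure, with ? to return early on Err.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    let n: u16 = s.trim().parse()?;
    Ok(n)
}

fn main() {
    assert_eq!(parse_port("8080").unwrap(), 8080);
    assert!(parse_port("not-a-port").is_err());
    // Option models "maybe a value" with no null pointer in sight
    let first = ["a", "b"].first().copied();
    assert_eq!(first, Some("a"));
}
```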

36-40: Modules, crates, workspaces

41-45: Testing, debugging, documentation

46-50: Standard library, common collections

Systems Programming (40 modules):

VLSI, FPGA, and ASIC design are NOT exactly a big part of this curriculum, but they are important to the development of real-time fault-tolerant systems and communication meshes. Computer architectures are still rapidly evolving, eg witness the rise of HBM3 to get past the von Neumann bottleneck and place memory closer to processing units. Accordingly, we take different side trips into research topics in computer architectures to ensure that we provide a foundation for understanding the "bare metal" that Rust programs will be getting closer to in the future.

51-60: Memory layout, pointers, unsafe Rust
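A taste of what these modules get into (the `read_raw` helper is our own illustrative name): creating a raw pointer is safe, but dereferencing one opts out of the borrow checker, so the compiler demands an explicit `unsafe` block and, by convention, a SAFETY comment justifying it.

```rust
fn read_raw(x: &u32) -> u32 {
    let p: *const u32 = x; // making a raw pointer is safe...
    // ...dereferencing it is not, hence the unsafe block.
    unsafe { *p } // SAFETY: p comes from a live reference, so it is valid and aligned
}

fn main() {
    let v: u32 = 0xDEAD_BEEF;
    assert_eq!(read_raw(&v), 0xDEAD_BEEF);
    // Memory layout is inspectable too: the niche optimization makes
    // Option<&u32> the same size as &u32 (null encodes the None case).
    assert_eq!(
        std::mem::size_of::<Option<&u32>>(),
        std::mem::size_of::<&u32>()
    );
}
```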

61-65: Concurrency, threads, sync primitives
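A minimal sketch of shared-state concurrency as covered in these modules (the `parallel_count` function is our own): `Arc` gives shared ownership across threads, and `Mutex` is the sync primitive guarding the counter.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn parallel_count(workers: usize, per_worker: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let mut handles = Vec::new();
    for _ in 0..workers {
        let c = Arc::clone(&counter); // each thread gets its own Arc handle
        handles.push(thread::spawn(move || {
            for _ in 0..per_worker {
                *c.lock().unwrap() += 1; // lock guards every increment
            }
        }));
    }
    for h in handles {
        h.join().unwrap(); // wait for all workers before reading
    }
    let n = *counter.lock().unwrap();
    n
}

fn main() {
    assert_eq!(parallel_count(4, 1000), 4000);
}
```

The compiler enforces this discipline: try sharing the counter without the `Arc`/`Mutex` pair and the program simply will not build, which is Rust's "fearless concurrency" pitch in one example.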

66-70: Parallelism, rayon, crossbeam

71-75: FFI, linking to C code

76-80: Allocators, custom allocators

81-90: Performance, profiling, optimization

Embedded & Real-Time Systems (40 modules):

91-100: Embedded basics, no_std, memory-mapped registers

101-105: Interrupts, exceptions, fault handling

106-110: Device drivers, I/O

111-120: Real-time scheduling, RTOS concepts

121-125: Time handling, clocks, timers

126-130: Predictability, worst-case execution time

WebAssembly & Tauri (20 modules):

WebAssembly and Tauri are included to enable UI development and potential off-loading of computation.

131-135: WebAssembly basics, Rust to WASM

136-140: JavaScript interop, wasm-bindgen

141-145: Tauri fundamentals, project setup

146-150: UI development with Tauri

Robotics & ROS2 (30 modules):

The robotics portion covers essential concepts and ROS2 integration. When it comes to alternatives for swarm robotics, there are several frameworks and platforms to consider besides ROS2 … these other platforms are great for ideas, but the bottom line is that it is going to be next to impossible to beat the health of the comprehensive ROS2 ecosystem, unless you want to move in the direction of extending Arduino or Raspberry Pi, which are also important for learning essential concepts from the ground up. Let's take a look at some of these alternatives for learning about robotics operating systems and compare them to ROS2:

  1. MOOS (Mission Oriented Operating Suite):
    • MOOS is a cross-platform middleware for robotics research, particularly focused on autonomous marine vehicles.
    • It provides a publish-subscribe architecture for communication between processes and supports distributed computing.
    • Compared to ROS2, MOOS has a simpler architecture and is lightweight, making it suitable for resource-constrained systems.
    • However, ROS2 offers more extensive tools, libraries, and community support.
  2. Buzz:
    • Buzz is a domain-specific programming language designed for programming heterogeneous robot swarms.
    • It runs on a small virtual machine and provides swarm-level abstractions, such as virtual stigmergy for sharing state across the swarm.
    • Buzz is an extension language rather than a complete framework; its runtime can sit on top of robot controllers or simulators such as ARGoS.
    • Compared to ROS2, Buzz is much narrower in scope, but it is a good illustration of how a DSL can express swarm behavior directly.
  3. ARGoS (Autonomous Robots Go Swarming):
    • ARGoS is a multi-physics robot simulator designed for large heterogeneous robot swarms.
    • It supports multiple physics engines and allows for the simulation of various robots and environments.
    • ARGoS is primarily a simulation platform and does not provide a full robotics framework like ROS2.
    • However, it can be integrated with other frameworks, including ROS2, to leverage its swarm simulation capabilities.
  4. Swarm-sim:
    • Swarm-sim is an open-source simulation framework for swarm algorithms and multi-agent systems.
    • It provides a high-level API for defining agent behaviors and interactions, and supports various swarm algorithms out of the box.
    • Swarm-sim is focused on swarm algorithm research and education, rather than robot control and perception.
    • Compared to ROS2, Swarm-sim is a more specialized tool for studying swarm algorithms, while ROS2 offers a comprehensive framework for robot control and perception.
  5. NVIDIA Isaac (https://developer.nvidia.com/isaac):
    • The NVIDIA Isaac™ robotics platform includes a full suite of GPU-accelerated innovations in AI perception, manipulation, simulation, and software.
    • It leverages NVIDIA’s GPU technology for high-performance computing and provides tools for perception, planning, and control.
    • Isaac supports swarm robotics through its multi-robot coordination and simulation capabilities.
    • Compared to ROS2, Isaac offers tight integration with NVIDIA hardware and advanced AI capabilities, but ROS2 has a larger user base and broader ecosystem.

151-160: Robotics fundamentals, kinematics, control

161-165: Sensors, actuators, interfacing

166-170: ROS2 architecture, nodes, topics

171-175: Navigation, path planning, obstacle avoidance

176-180: Computer vision, image processing

Distributed Systems And Wireless Networking (20 modules):

These distributed systems modules START to prepare us for the challenges of swarm robotics, such as coordination, resilience, and security.

181-186: Swarm Intelligence: This transcends the research topic of distributed algorithms in computer science and relatively basic or fundamental topics like consensus theory, gossip theory, and gossip protocol styles. It takes us into multi-agent systems, crowd simulation, complex system topics, and complex adaptive systems made up of intelligent agents, such as the human social group-based endeavors of political parties, communities, geopolitical relations, organizations such as companies, institutions or criminal associations, resistance movements and leaderless resistance, clandestine cell systems, intelligence tradecraft and war. It also covers the categories of topics that fall under the heading "network science" or "social network analysis," including sociograms, sociomapping, sociometry, social dynamics, social contagion, swarm behavior and swarm intelligence, shoaling (staying together for social reasons) and schooling (swimming together in a coordinated manner). Finally, it spans everything from early attempts at modeling artificial life with relatively simple boids, and the slightly more complex self-propelled particles that interact with one another, up to using a simulation stack like Isaac Sim for robot learning, with a workflow orchestration platform to scale your workloads across distributed environments, so as to be computationally able to simulate interactions between some reasonably complex actors/machines.
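To make the gossip-protocol thread above tangible, here is a deliberately tiny push-gossip round simulation (a toy sketch of our own, not a production protocol): each informed node tells one peer per round until the rumor saturates the group. A deterministic xorshift step stands in for a real RNG so the sketch stays reproducible.

```rust
fn gossip_rounds(n: usize) -> usize {
    let mut informed = vec![false; n];
    informed[0] = true; // one node starts with the rumor
    let mut rounds = 0;
    let mut seed = 0x2545_F491u64; // fixed seed: reproducible runs
    while informed.iter().any(|&b| !b) {
        rounds += 1;
        let talkers: Vec<usize> = (0..n).filter(|&i| informed[i]).collect();
        for _ in talkers {
            // xorshift64 step in place of a real random peer choice
            seed ^= seed << 13;
            seed ^= seed >> 7;
            seed ^= seed << 17;
            informed[(seed as usize) % n] = true;
        }
    }
    rounds
}

fn main() {
    // Rumor spread in push gossip is roughly logarithmic in group size.
    println!("10 nodes saturated in {} rounds", gossip_rounds(10));
}
```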

187-190: Replication, sharding, fault tolerance: Together, replication, sharding, and fault tolerance can help create distributed systems that are resilient, scalable, and adaptable, which are essential properties for the intelligent coordination of robot swarms operating in uncertain, chaotic environments, for cognitive radio networks dynamically managing spectrum, and for secure battlespace digitization, network-centric warfare communication, and intelligence gathering that must withstand adversarial actions. Implementing these concepts is certainly not as simple as understanding the terminology; it usually involves tradeoffs in terms of complexity, overhead, and performance, but the concepts are key parts of the process of creating robust and effective distributed systems in these domains.

Replication involves maintaining multiple copies of data or services across different nodes in a distributed system to improve reliability, fault tolerance, or accessibility. In the intelligent coordination of robot swarms, replication can be used to distribute sensing, computation, and actuation across multiple robots. This redundancy increases robustness and fault tolerance, as the swarm can continue functioning even if some individual robots fail. In cognitive radio networks, replication can be applied to critical control information and network state, ensuring this data is available across multiple nodes in case of failures. Replication is also important for distributed spectrum sensing and decision making. For battlespace digitization, network-centric warfare communication, and intelligence gathering, replication of command and control messages across multiple paths can ensure delivery even if some links are disrupted. Replication of cryptographic keys and other security-critical data is also important for resilience.

Sharding is the software design pattern of partitioning a large dataset or workload across multiple nodes in a distributed system. Consistent hashing is the technique used in sharding to spread large loads across multiple smaller services and servers: a hash function computes an index, or hash code, into an array of buckets or slots from which the desired value can be found or stored in a manner that avoids hash collisions, and, under consistent hashing, when the hash table is resized only n/m keys need to be remapped on average [where n is the number of keys and m is the number of slots]. Understanding how this resizing behavior works is essential to understanding sharding design patterns. In the intelligent coordination of robot swarms, sharding can be used to divide a large task (like exploring a large area) into smaller subtasks that can be independently handled by different subsets of the swarm. This parallel processing can improve efficiency and speed. In cognitive radio networks, sharding of the radio spectrum into smaller chunks that can be independently sensed and accessed by different nodes can improve spectrum utilization and minimize interference. For battlespace digitization, network-centric warfare communication, and intelligence gathering, sharding of the communication network into smaller, more manageable subnetworks can improve scalability and reduce the impact of localized failures or attacks.
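The paragraph above can be sketched as a minimal consistent-hashing ring (an illustrative toy of our own; `Ring`, `hash_of`, and the shard names are all made up): keys and nodes hash onto the same u64 ring, and a key belongs to the first node clockwise from its hash, so adding or removing one node only remaps the keys in that node's arc.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::BTreeMap;
use std::hash::{Hash, Hasher};

struct Ring {
    nodes: BTreeMap<u64, String>, // sorted positions around the ring
}

fn hash_of<T: Hash>(t: &T) -> u64 {
    let mut h = DefaultHasher::new();
    t.hash(&mut h);
    h.finish()
}

impl Ring {
    fn new() -> Self {
        Ring { nodes: BTreeMap::new() }
    }
    fn add(&mut self, node: &str) {
        self.nodes.insert(hash_of(&node), node.to_string());
    }
    fn shard_for(&self, key: &str) -> Option<&str> {
        let h = hash_of(&key);
        // first node at or after the key's position, else wrap to the start
        self.nodes
            .range(h..)
            .next()
            .or_else(|| self.nodes.iter().next())
            .map(|(_, name)| name.as_str())
    }
}

fn main() {
    let mut ring = Ring::new();
    for n in ["shard-a", "shard-b", "shard-c"] {
        ring.add(n);
    }
    let owner = ring.shard_for("robot-42").unwrap();
    println!("robot-42 is stored on {}", owner);
}
```

Real systems typically add virtual nodes (several ring positions per physical node) to even out the arc sizes; that refinement is omitted here for brevity.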

Fault tolerance is the ability of a system to continue functioning properly in the event of a failure of some of its components. In the intelligent coordination of robot swarms, fault tolerance is crucial because individual robots may fail due to mechanical issues, battery depletion, or hostile actions. The swarm as a whole should be able to adapt and continue its mission despite these failures. In cognitive radio networks, fault tolerance is important to maintain reliable communication in the presence of interference, jamming, or node failures. Techniques like spectrum sensing, dynamic spectrum access, and adaptive routing can help ensure the network remains operational. For secure battlespace digitization, network-centric warfare communication, and intelligence gathering, fault tolerance is essential to maintain command and control even if some communication nodes are compromised or destroyed. Meshes and ad-hoc networks with redundant paths can provide this resilience.
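One of the simplest fault-tolerance patterns behind this discussion is redundancy plus voting (a toy sketch of our own, not a full consensus protocol): read the same value from several replicas and take a strict majority, so a single faulty replica cannot corrupt the answer.

```rust
// Returns the strictly-majority value among replica readings, or None
// when no value holds a strict majority (the caller must retry/fall back).
fn majority(readings: &[i32]) -> Option<i32> {
    let mut best: Option<i32> = None;
    let mut best_count = 0;
    for &candidate in readings {
        let count = readings.iter().filter(|&&r| r == candidate).count();
        if count > best_count {
            best_count = count;
            best = Some(candidate);
        }
    }
    // a strict majority is required; a tie means there is no safe answer
    if best_count * 2 > readings.len() { best } else { None }
}

fn main() {
    // replica 3 is faulty, but the group still agrees on 42
    assert_eq!(majority(&[42, 42, 7]), Some(42));
    // with no strict majority the caller must fall back or retry
    assert_eq!(majority(&[1, 2, 3]), None);
}
```

Tolerating Byzantine (actively lying) nodes needs more replicas and real protocols, but the f-failures-need-2f+1-voters intuition starts here.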

191-195: Wireless networking, reliable message passing, standard protocols, SDR and cognitive radio, signal processing research

196-200: Cybersecurity: It’s probably impossible to map out the giant rabbithole that information security has become in the last three or four decades, but it’s probably not a bad idea to try to develop your own prioritized Top 10 List of some of the best blogs, repositories, newsletters, and other infosec resources. Spending a few days developing a prioritized list that one might come back to on a weekly basis is more than enough to immerse anyone in the truly hysterical topic of cybersecurity.

  1. GitHub Security Lab is extremely useful for the GitHub opensourcist user audience like me AND … in the very best opensourcist fashion of GitHub … if you are already an independent security researcher or cybersecurity professional, or someone who's already up to speed and even ahead of the game with respect to everything on this list, you can join the CodeQL Bug Bounty program to be rewarded for queries that have a positive impact on open source projects by codifying your security knowledge as an expressive, executable, and repeatable CodeQL query that can be run on many codebases. Others in the VERY BEST opensourcist fashion of GitHub, in the vein of the most forked repositories in network-security, cybersecurity, and DevSecOps, include Meir Wahnon's curated list of tools for incident response, Marek Šottl's Ultimate DevSecOps library, the Mobile Security Framework (MobSF), an automated, all-in-one mobile application (Android/iOS/Windows) pen-testing, malware analysis and security assessment framework capable of performing static and dynamic analysis, Firezone's open source WireGuard®-based zero-trust access platform with OIDC auth and identity sync, Scapy's Python-based interactive packet manipulation program & library, HackerRepo.org by Omar Santos … and tens or hundreds of other repositories on GitHub that are better to explore … RATHER THAN WASTING TIME WITH THE RAMPANT FEAR-MONGERING OF CYBERSECURITY BLOGSTERS and the depressingly incompetent NEWS MEDIA that quote them verbatim!

  2. Schneier on Security is … AFTER ALL THESE YEARS! … still easily the most level-headed, SANE and cerebrally PROFESSIONAL resource on information security. As a general rule, everyone should read Schneier On Security at least once a week, or maybe more frequently. We all know that infosec is something that we should be more aware of … the reason that normal, well-adjusted people tend to shy away from this topic is that, unfortunately, MOST OF, but not quite all of, the material on information security is like antivirus software being worse than viruses … people shy away from infosec blogs because they don't want to catch infosec mindrot. EXCEPT FOR Schneier on Security … most of the OTHER material you will find will be ad-heavy content, inherently needy and likely to be insecure or even quasi-malicious tracking malware; it is mostly just an out-and-out hysteria barrage coated in a disgusting slop of stewed fear-mongering grease, ie "THIS new threat is something you need to be totally terrified of and you're probably already fucked, but it's too slippery for you to grasp, so get yourself some of our X or book me for a seminar immediately."

  3. OWASP (Open Web Application Security Project) is a nonprofit foundation with tens of thousands of members working to improve the security of software with community-led open source projects including code, documentation, and standards. Different materials might be particularly useful to people trying to learn as much as possible as quickly as possible about security. OWASP provides a wealth of resources, including the Cheat Sheet Series and the Software Assurance Maturity Model (SAMM).

  4. SecLists.Org Security Mailing List Archive – everyone knows that the latest news and exploits are not found on any web site. The cutting edge in security research is and will continue to be the full disclosure mailing lists such as Bugtraq. SecLists provides web archives and RSS feeds. You can browse the individual lists or search them all using the Site Search box. A similar effort that uses a slightly different approach in gathering its intelligence is Packet Storm Security, which provides around-the-clock information and tools in order to help mitigate both personal data and fiscal loss on a global scale. As new information surfaces, Packet Storm releases everything immediately through its RSS feeds, Twitter, and Facebook pingdump … so that the Redditors, Facebookers, X-sters and LinkedIn crowd can re-pingdump it over and over and over to their audiences, ie it's just like a total eclipse or all kinds of end-of-the-world shit, ie you will not be able to avoid it, even if you wanted to.

  5. Black Hat Conference is another computer security conference which provides security consulting, training, and briefings to hackers, corporations, and government agencies around the world. Black Hat is very similar to DEF CON, except that Black Hat is actually four years younger and has a different audience. Black Hat is typically scheduled prior to DEF CON, with many attendees opting to go to both conferences or catch the tail of Black Hat and the start of DEF CON. Black Hat is probably perceived by the security industry as being the more mature, more corporate security conference, whereas DEF CON is more informal and more fun for a younger, partying audience … although there's an element of competition in the management of these conferences, the two really complement each other by targeting slightly different audiences. DEF CON is the oldest continuously running cybersecurity / hacker convention around, also one of the largest and not entirely one of the worst … although with its nerdware antics and contests like lockpicking, robotics-related contests, art, slogan, coffee wars, scavenger hunt, and Capture the Flag, it has become sort of a sociohistorical cliche of its nerdy recursive back-to-the-nerdy-kiddo future self.

  6. Exploit Database is a non-profit project that is maintained and provided as a public service by the OffSec training and pentesting company as one of its open source community projects. A similar effort is the SANS Internet Storm Center (ISC), part of the SANS Technology Institute.

  7. Krebs on Security covers a wide range of topics, from general security news and analysis to technical details on vulnerabilities, exploits, and defensive measures. Brian Krebs is probably as good, concise, and well-written as journalism can get when it comes to reporting on cybersecurity. In a similar vein, independent bloggers who do a good job of writing about topics in this realm include Risky Business Group, Daniel Miessler's Unsupervised Learning, Troy Hunt's Blog and Graham Cluley.

  8. The Hacker News attracts 50 million readers annually and is the most followed B2B cybersecurity news outlet on all major social media platforms … so it might not be the first place you find out about something, but THN will not allow any drama to go unexploited. In a similar vein, Threatpost prides itself on being referenced as an authoritative source on information security by the copypasters who write stuff for the leading newsholes, like the New York Times, Wall Street Journal, MSNBC, USA Today and National Public Radio. Others in this space worth mentioning include Naked Security (Sophos), Bleeping Computer, Dark Reading, Security Weekly and CSO Online.

  9. Reddit r/netsec is a Redditor community-curated lowest-common-denominator aggregator, ie Reddit is an LCD aggregator because of the way that Redditors tend to dislike or even downvote uncomfortable ideas or commentary … but it's all content that you could have found elsewhere [looking at higher-priority items on this list]. Reddit [or for that matter Facebook or Twitter or LinkedIn] is probably NOT where you find the latest or the greatest information; it's not going to be an adventure like going to a conference in Vegas … BUT social media does give one a general sense of what people interested in this topic are talking about. The content here is about what you would expect from Reddit and reflects its population; Redditors tend to be much younger and are significantly more likely to be male than Facebook or Twitter or LinkedIn users, but the conversations tend to be about the same, LCD-wise. FWIW, the stuff you find on Facebook, Twitter or LinkedIn will GENERALLY be less useful and less current than Reddit … but when we least expect it, even blind pigs can occasionally shit acorns.

  10. Microsoft Security Response Center is about as Microsoft as cybersecurity can get … and that means it probably is essential content for Windows-centric cybersecurity enthusiasts. In a similar fashion, AWS Security Blog is about as AWS as cybersecurity can get … which means that it's entirely skippable stuff for most, but probably essential content for AWS-centric IT consultants. However, it is worth noting the different approach of Google Project Zero, the team of top-rate security analysts employed by Google who are tasked with performing vulnerability research on popular software like mobile operating systems, web browsers, and open source libraries in order to find, REPORT and PUBLICIZE zero-day vulnerabilities … as with everything that Google does, SCALE gives Google advantages that nobody else has; in this case, they report and publicize zero-day vulnerabilities. It's really about using Google scale to SHAME large software companies into fixing their software. But we should not forget that Microsoft, AWS, and Google are themselves massively huge software-driven trillion-dollar companies. It's in their best interest to develop a best-in-class vulnerability research competency to make sure that their own systems are secure … WE NEED THEM TO BE RELIABLE and generally they are … we shouldn't usually expect them to produce any PUBLIC content that's all that disruptive or earth-shattering.

There are different terms and key words that are important in cybersecurity for IoT, which is related to, but necessarily different in focus from, general cybersecurity, which is focused on humans, social engineering, identity theft and malevolent actors that tend to be financially motivated.

Encryption is always an infinitely beguiling topic to strategists; this will be true even after we understand the inevitability of its eventual vulnerability to things like quantum computing and communication and meta-algorithmic combinations of technological flanking strategies which exploit both quantum coherence and quantum decoherence. It will always be possible to hide things in plain sight or to use multiplexed [steganography] to convey information within riddles within enigmas within other message channels.

This is why security specialists study and debate topics like multi-factor authentication, authorization, digital signatures, and cryptographic storage … the OLD ways will always keep getting figured out by the bad guys FASTER than the good guys can come up with NEW ways … so even though we are aware of things like glaring PGP deficiencies and why PGP should just die, there will always be a new scramble to move on to something slightly more secure, such as signing arbitrary data with your SSH keys … which in a meta-sense is like continually changing your pas$word by a single letter, or changing the locks on your house while still living in a way that attracts lock-picking intruders.