Take a Bit of a Byte

The concept of a byte and why it consists of 8 bits can be traced back to the early days of computing and the development of computer architectures. When computers were first being designed, a byte was established as the fundamental unit of storage.

At that time, engineers needed a unit of storage that was large enough to represent a wide range of characters and symbols used in computing, yet small enough to be efficiently processed by the computer’s hardware. They settled on a byte consisting of 8 bits.

The choice of 8 bits was influenced by a few factors. One important consideration was that 8 bits provided enough combinations to represent a sufficiently large character set. With 8 bits, there are 2^8 (or 256) possible combinations, which allowed for the representation of a variety of alphanumeric characters, symbols, and control codes.
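
If you want to see that math for yourself, here's a quick Python sketch (just an illustration; the characters and message used are arbitrary):

    # Each extra bit doubles the number of values a unit of storage can hold.
    for bits in (1, 4, 7, 8):
        print(f"{bits} bits -> {2 ** bits} possible values")   # 8 bits -> 256

    # 8 bits per character covers letters, digits, symbols, and control codes.
    for ch in "Hi!":
        print(ch, ord(ch), format(ord(ch), "08b"))  # character, code, 8-bit pattern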

Another factor was the hardware design of early computers. The use of binary digits (bits) as the fundamental building blocks of data storage and processing was a natural choice due to its simplicity and compatibility with electronic circuits. Binary digits were easy to handle and manipulate within the computer’s hardware architecture.

Additionally, the use of 8 bits aligned well with the hardware architecture of many early computer systems. The design of processors, memory, and other components was often based on byte-oriented operations, where data was accessed and processed in chunks of 8 bits. The alignment of storage and processing units with 8-bit boundaries allowed for efficient data handling and operations within the computer’s architecture.

Over time, the 8-bit byte became widely adopted and standardized across various computer systems and programming languages. It remains the most common unit of storage and data representation in modern computing, providing a balance between flexibility, efficiency, and compatibility.

So, the choice of 8 bits in a byte was a result of considerations related to character representation, hardware architecture, and compatibility, ultimately becoming a widely accepted standard in the field of computing.

Unleash the Speed Demons!

Bandwidth refers to the capacity of a network connection to transmit data. It is measured in bits per second (bps). Network bandwidth determines how quickly information can be sent and received over the internet. Just like a wider pipe allows water to flow more quickly, a larger bandwidth allows more data to flow through the network.

In the computer world, the basic unit of storage is a bit, which is a 1 or 0. Bandwidth is typically measured in megabits per second (Mbps), where 1 megabit equals 1 million bits.

Transfer speed, on the other hand, is measured in bytes per second (Bps), with a capital “B” to distinguish it from bits. Bytes are a collection of 8 bits and are used to represent characters in computer systems. Want more on this subject? Check out my post on it!

Ethernet cables, such as CAT-5e or better, have a rated bandwidth of 1Gbps (1 billion bits per second). If you connect your computer to a router using an Ethernet cable, the connection between them can transfer data at up to 1Gbps. However, the connection between your router and the internet service provider (ISP) may have different ratings.

Your ISP provides you with a certain amount of bandwidth that you can use to transfer data between your home and the ISP. This is not the same as the capacity of the connection itself. ISPs often have high-capacity connections to the internet backbone, but they share the bandwidth among their customers. This means that the bandwidth you get from your ISP is the capacity you are renting, not dedicated bandwidth to you alone.

If multiple customers in your area are using the internet simultaneously, the ISP may employ oversubscription. This means they allocate less bandwidth than the sum of what all customers are paying for, assuming not everyone will use their full bandwidth simultaneously. This can lead to network congestion and higher latency.
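
To put some made-up numbers on that idea (these figures are purely hypothetical, not any real ISP's):

    # Hypothetical oversubscription math, for illustration only.
    customers = 200            # subscribers sharing one upstream link
    plan_mbps = 150            # bandwidth each customer pays for
    upstream_mbps = 10_000     # actual capacity of the ISP's upstream link

    sold_mbps = customers * plan_mbps
    print(f"Bandwidth sold: {sold_mbps} Mbps on a {upstream_mbps} Mbps link")
    print(f"Oversubscription ratio: {sold_mbps / upstream_mbps:.0f}:1")  # 3:1 here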

When comparing Ethernet and wireless connections, it depends on the specific scenario. Ethernet can provide faster speeds than wireless in some cases, as it typically supports 1Gbps or higher. However, modern Wi-Fi standards like 802.11ac can support theoretical speeds of up to 3.46Gbps. In your specific case, with a strong 802.11ac connection, wireless can actually be faster than the Ethernet connection available on your router.

Keep in mind that the maximum bandwidth available to you is determined by your ISP. If your ISP provides 150Mbps of bandwidth, the maximum theoretical transfer speed would be around 18.75MBps (Megabytes per second). However, due to factors like network congestion and shared usage in your home, you can expect to achieve around 75% of that speed, approximately 14MBps.
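
Here's that same arithmetic as a tiny Python snippet, in case you want to plug in your own plan's numbers (the 75% figure is just the rough rule of thumb from above):

    plan_mbps = 150                           # what the ISP sells, in megabits per second
    theoretical_mbytes = plan_mbps / 8        # 8 bits per byte -> 18.75 MBps
    realistic_mbytes = theoretical_mbytes * 0.75   # rough real-world factor, ~14 MBps

    print(f"Theoretical max: {theoretical_mbytes:.2f} MBps")
    print(f"Realistic estimate: {realistic_mbytes:.2f} MBps")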

It’s important to remember that bandwidth is shared among users in the same network, so as more people in your home use the internet simultaneously, the available bandwidth for each individual may decrease.

Artificial What?

Over the last 2 months, there’s been a single rising star on the horizon: ChatGPT, by the company OpenAI.

For those that have not heard about ChatGPT, please share what corner of the world you are living in so I can join you there in blissful ignorance of reality! For everyone else, here’s my take on ChatGPT and what it is, including some playful banter I had with the tool.

In AI, or Artificial Intelligence, a model is what drives the output of the AI. An AI model is a set of machine learning algorithms that try to emulate logical decision-making based on the data used to train the model itself. This is the basis of any AI engine, and for ChatGPT, that model is GPT. GPT, or Generative Pre-trained Transformer, is a type of language model trained on a large amount of text data with the sole purpose of generating human-like text in response. The name ChatGPT is a derivative of this: you are chatting with the GPT model that OpenAI created.
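
For the programmers in the room, talking to a GPT model from code looks roughly like this. This is just a sketch assuming OpenAI's openai Python package (the older v0.x-style interface), with a placeholder API key and prompt, not something pulled from my conversation below:

    import openai  # assumes the openai package is installed and you have an API key

    openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Explain what a byte is in one sentence."}],
    )
    print(response.choices[0].message.content)  # the model's generated reply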

What’s the purpose of ChatGPT? Simple: to mimic a human conversation using logical responses, kinda like having a chat session with Spock. But wait, there’s more! It has also been able to write computer program code, because what’s a more logical grouping of text than program code! It can compose music, which really is a mathematical construct at its core, and it can write poems, stories, even a term paper. Granted, it will sometimes be very obvious just from reading the paper that a human did not write it. You will see that below.

Ok, now on to the fun, let’s have a conversation with ChatGPT. I will not edit ChatGPT’s responses so you can see that in some cases, it’s pretty damned good, but in others, you can see grammatical errors and what is called “AI hallucination”.

So I start with “in layman’s terms, explain what ChatGPT is and how it can make my life better”

ChatGPT is an AI-powered language model developed by OpenAI. It’s designed to generate human-like responses to text inputs. Essentially, it’s like having a very smart computer program that can understand and respond to natural language questions and commands.

In terms of how it can make your life better, ChatGPT can be used for a variety of tasks such as answering questions, providing information, and even generating creative writing. For example, you could use ChatGPT to help with your homework by asking it questions about a topic you’re studying, or you could use it to get quick answers to questions you have throughout the day. Additionally, ChatGPT can also be used for entertainment, such as generating jokes or writing short stories. By using ChatGPT, you can save time and effort by getting information and answers more quickly, and also have some fun along the way.

Ok, so that’s probably a pretty canned response, so I decided, let’s really test this by asking “Can you expand more on how ChatGPT works”.

Sure! ChatGPT is a type of AI model called a Transformer-based language model. These models are trained on massive amounts of text data and use that training to generate new text.

ChatGPT specifically is trained on a dataset of billions of words from the internet and is capable of understanding and producing human-like text. When you input a text prompt, the model uses its understanding of language patterns and relationships between words to generate a response.

The model is based on a neural network architecture, which is a type of machine learning algorithm that is designed to recognize patterns in data. The network is trained on the text data to identify patterns in language and generate responses based on those patterns.

Once the model is trained, you can interact with it by inputting a text prompt and receiving a text response. The model uses its understanding of language to generate a response that is relevant to the input prompt. The response is generated in real-time, and the model can generate multiple responses to the same prompt, depending on the specific context and the intent behind the prompt.

Overall, ChatGPT is designed to generate human-like text responses, which makes it a powerful tool for a wide range of applications, from answering questions and providing information, to creative writing and more.

Ok, so that’s actually pretty good. Of course, my brain can’t stop there, so I asked this: “What would be a funny satireirical title”. Notice the misspelling; I had a brain fart, but ran with it because it should know what I mean and not what I type, right? Its response was

Here’s a satirical title for you: “The Politician’s Guide to Saying Absolutely Nothing of Substance: A Step-by-Step Handbook.”

I fixed the misspelling and resubmitted it, here’s the response.

Here’s a humorous and satirical title for you:

“The Misadventures of Mr. Politician: A Guide to Being Incompetent and Getting Away with It”

Ok, the thing has a sense of humor, kinda. More like ripped from the headlines and arranged in a way to fit a satirical title, but damned that’s funny! Let’s be a little more specific now, “what is a satirical title to a blog post explaining how ChatGPT works”

Here’s a satirical title for a blog post explaining ChatGPT: “How ChatGPT is Taking Over the World: A Guide to Understanding the Rise of the AI Overlords.”

Now we are getting somewhere that’s bordering on funny and scary all in one shot! One thing to note, I am entering my responses in a chat-like window, similar to Discord, so it’s taking all previous responses into context as it responds.

Let’s try this, “What is a short, funny way of saying that using the least amount of words”

Here’s a short, funny way to say that: “AI’s takeover: a funny guide.”

I almost used that as the title of this blog post to be honest, because damned that’s good! I also decided to have it describe itself, which I then edited and ran through Dall-E 2, producing the image provided at the top of this post.

Finally, and I think this is a good closing to the blog post, I asked ChatGPT “What would you say to people who are afraid that you will make writers obsolete”. Here’s a better response than what I could ever come up with.

As an AI language model, I have been designed to assist with various language tasks, including writing. While I can generate text quickly and accurately, I do not have the creativity, originality, and personal voice that comes with human writing.

In many cases, AI language models like myself can actually help writers by providing inspiration and helping with research and editing. However, it is important to note that AI models like myself should be used as tools, not replacements, for human writers.

Additionally, writing is not just about the words on the page, but also about the emotions, perspectives, and experiences that a writer brings to the table. These unique qualities cannot be replicated by AI, and therefore, human writers will always be necessary and valued in the field of writing.

So, to those who are afraid that AI will make writers obsolete, I would say that AI language models like myself have the potential to augment and enhance the work of human writers, rather than replace them.

Flight of the Penguin

When someone says Linux, most people will think of hackers, total geeks, and obscure software running somewhere. What would you say if I told you that you’ve already been around Linux, and that it’s in more places than you think?

Are you reading this on an iPhone, iPad, or Windows machine? If not, then most likely you are using one of the many flavors of Linux. For instance, Android started out as a fork of the Linux kernel and is now essentially just another distribution of Linux. Yep, Android is Linux.

Ok, let’s take a step back: what is Linux? Linux is an open-source, Unix-like operating system built around the Linux kernel, which was created by Linus Torvalds in late 1991. If you are not familiar with what a kernel is, don’t worry; what’s important for this discussion is that a guy in college decided one day to write his own operating system, and here we are.

Roughly 90% of the servers that run the Internet use Linux in one shape or another, and Linux also runs on embedded devices such as thermostats, fridges, and smart TVs in the form of Android, Roku, webOS, and so on. So, without knowing it, you are surrounded by Linux. Even NASA runs Linux, the same Linux that you have access to, the same Linux I am writing this on now!

So what is this distribution thing I keep talking about? A Linux distribution, often called a distro for short, is an operating system (OS) that bundles the Linux kernel with a package management system used to install and maintain software. Unlike Windows, you don’t have to go out and find individual downloads for everything; usually it’s already in the distro’s package management system. A package manager, or package management system, is a collection of software tools that automates the process of installing, upgrading, configuring, and removing programs on a computer’s operating system in a consistent manner. Windows does have a package management system called Chocolatey, but it’s not used as much as it could be.

So how many distros are out there? Well, since anyone can make a distro and distribute it, there is no real count of distros, but there is a finite number of well-known distributions, ones that I would say can be trusted. Some distros even have derivative child distros that go several generations deep.

https://en.wikipedia.org/wiki/Linux_distribution#/media/File:Linux_Distribution_Timeline.svg is a really good visual of this family tree, but for the short version, there are four major ancestors of most Linux distros: Red Hat, Android, Slackware, and Debian. These four have by far the largest family trees of derivative distros.

So, why would one decide to use Linux instead of Windows? Well, Linux, for the most part, is Free, Open Source Software (FOSS), which means you do not pay to use it, ever. So you can run it on any hardware you want without paying crazy license fees or worrying about licensing the OS.

Second, Windows has become its own worst enemy. Because of its widespread use in business and home PCs, it’s the most popular target for those who wish to do you harm, take your information or money, or just cause chaos. The other reason is that with Windows being Closed Source, there are very few people looking for and fixing bugs, so things slip through the cracks all the time in a major way, and when they do fix things, they often break other things. 2020 has not been a good year for Windows users and updates.

What’s this Open Source, Closed Source thing I’m talking about? Well, the core of any program, operating system, or kernel is source code. This is the human-readable code, written in a programming language, that makes the system function. In Open Source, the source code is open to the public to view, and anyone can ask to contribute fixes and new features. In this model, the source code of the kernel and OS is looked at by hundreds of thousands of eyes, making bugs easier to find.

With Closed Source, it’s just the opposite. Only a few people at Microsoft have access to the code to find and fix bugs and introduce new features. That’s an incredible amount of pressure, and things will be missed because deadlines must be met so that users keep paying Microsoft. Here’s a deeper dive on the subject of Open vs. Closed Source: https://en.wikipedia.org/wiki/Comparison_of_open-source_and_closed-source_software

One thing to note is that as time goes on, more and more Linux features make their way into Windows, even to the point of being able to run an entire Linux distro natively in Windows using the Windows Subsystem for Linux (WSL).

What distro am I running? I have used Ubuntu for several years and like the ease that it offers, and since it is one of the most popular distros, there’s a lot of software and support for it, including a direct partnership with Microsoft. Ubuntu, by the way, is a derivative distro of Debian. There are a ton of other distros based on Ubuntu itself, each with its own little pros and cons.

I am running Ubuntu on a couple of laptops, just about every server, and some embedded devices (Raspberry Pi), and I do just about everything I need to do on it. I don’t game, mind you; that is still the realm of Windows and Android for now, but for everything else I am covered. It’s like having ChromeOS (which is based on Linux) without the limitations. Ubuntu isn’t any more difficult to use than Windows; in some cases it’s easier, in others not so much. It’s more about being different and taking some getting used to.

I would say that if you want to try out Linux, you can grab a USB flash drive and run a LiveUSB version of the OS, which means you can run it without ever installing it. There are tons of instructions out there on how to do that.

The last thing I will touch on is updates to the OS under Linux. Unlike Windows, it rarely requires a reboot! Yep, keeping your system up to date without having it randomly reboot on you is a reality, just not under Windows. I have had my laptop running for about 3 months now with no issues and no reboots. Yes, 3 months straight, no reboots and nothing out of date.

I encourage everyone to take a look at Linux as an alternative OS to Windows. It’s fun, easy, and in most cases more secure than Windows hands down.

What’s Up G

Today’s tech news is dripping with 5G: this hardware or that hardware being delayed, who is getting 5G first in their handset, etc.

Well, let’s break some of this down and dig a little deeper; by the end, I bet you will agree that 5G is really not that big of a deal for the majority of mobile users.

So, what is 5G? Well, it’s the 5th Generation of wireless standards, as ratified by the ITU-R. Wait a second, the what ratified by who? The International Telecommunication Union, or ITU, which is based in Geneva, Switzerland, and has been responsible for telecommunications standards since 1934, but not to be confused with the International Telegraph Union (ITU) of 1865, which is the historical parent of the modern ITU.

What about that R? Well, ITU-R stands for the ITU Radiocommunication Sector, the modern successor to the International Radio Consultative Committee created in 1927. The ITU itself is a specialized agency of the United Nations, and the ITU-R is the sector responsible for managing radio spectrum and wireless standards globally. They even have a hand in assigning satellite orbits to companies and countries.

So now that that is out of the way, back to 5G. Currently, we run on a wireless standard known as 4G LTE, or 4th Generation wireless, Long-Term Evolution. Remember the “old” 3G? That was 3rd Generation wireless, so we’re going into the second generation beyond that technology that fueled the mobility revolution.

So, why 5G? Well, innovation never stops, and the quest for bigger, faster, “better” is part of the human condition, which is where 5G comes from, and 4G LTE before it. It’s meant to solve a problem that most folks don’t even realize they have, to be honest. Backing up a second, there’s some marketing out there talking about 4G LTE-A, 4G LTE-Advanced, and 5Ge, which really isn’t confusing at all. Let’s poke around here before going deeper.

4G is actually a higher standard than 4G LTE, calling for higher transmission rates than the LTE marketing speak delivers. Then we have this 4G LTE-A, 5Ge, and LTE-Advanced stuff, which, hold on to your butts, is just true 4G (not the LTE crap). The marketing folks decided it wouldn’t look “better” to simply drop the LTE label for the newer hardware, so they came up with those three very confusing terms instead. Thank you, marketing wanks, for that!

So, today every handset made uses the 4G LTE technology, with some adding support for the newer “true” 4G technologies. Coming to a store near you to lighten your bank account, 5G capable phones and tablets! But should you upgrade?

Well, starting to dive deeper, 4G LTE is the current standard, and even newer 5G hardware will use 4G LTE towers for some time to come, possibly into 2030 or even 2035. So that iPhone 11 you got last year, or even the Android phone from 2 years ago will have a lot of life in them.

So what’s the difference between 4G LTE and 5G? Well, fundamentally it comes down to spectrum, or the radio frequencies each technology uses. This spectrum determines core traits of each technology, such as speed, the number of users in a given area, the distance between the tower and the phone, and how easy it is to block or attenuate the signal.

Let’s start with frequency. 4G LTE uses the spectrum between 700 MHz and 2.5 GHz, and those who are keen-eyed will spot one of the issues with 4G LTE right away: the top of that range sits right next to the 2.4 GHz spectrum used by Wi-Fi (the old Wireless G protocols). Also, those who are really keen will recognize 2.4 GHz as the frequency at which a microwave oven cooks your food. So double whammy there, buddy. Additionally, not every carrier is licensed to use all of the spectrum, which is broken up into bands and auctioned off in licensed blocks by the FCC. So a carrier may only have a few blocks in which to provide service to all of its customers in a given area, which can get really complicated really quick.

For example, Verizon uses LTE bands 2, 4, and 13, with the majority of hardware using band 13. This really limits how many folks can be on a given cell tower at a given time, with the average numbers touted as around 100 users per cell tower, per second. Remember, your phone isn’t transmitting or receiving 100% of the time, so that 100 users per second may translate out to 1,000 or 100,000 users over a longer time period. Granted, the more users wanting time on that tower, the more a line, or queue, starts to build up, and while a user is waiting for their turn in that line, their browser just sits and spins, and this happens for each request! Just opening a web page, with all the ads, scripting, and cookies, may take dozens or hundreds of requests just to open that one news article.

5G operates at much higher frequencies, in some cases. There are actually two different spectrum ranges for 5G: the sub-6 GHz FR1 spectrum and the millimeter-wave FR2 spectrum. The first operates between 450 MHz and 6 GHz, and the latter operates on the 26, 28, 38, and 60 GHz bands. The channels are also wider: where a 4G LTE carrier tops out at 20 MHz of bandwidth, 5G channels can be up to 100 MHz wide in FR1 and up to 400 MHz wide in FR2, which is a big part of where the extra speed comes from.

Let’s hold again at this layer and point out something about radio frequencies. The higher a frequency goes, the more data it can carry, but the shorter its range for a given transmit power and the easier it is to block, or attenuate, the signal, so you trade speed for distance. So while the FR2 spectrum can move a huge amount of data, multiple gigabits per second (the official 5G target is a 20 Gb/s peak), its range will be much shorter than FR1, and objects such as trees, walls, and even rain will block it much more easily than FR1 transmissions.
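
To put a rough number on that speed-versus-distance tradeoff, here's a small Python sketch using the textbook free-space path loss formula. This is an idealized model (real-world losses from walls, trees, and rain are worse), and the distances and bands chosen here are just examples:

    import math

    def free_space_path_loss_db(distance_km: float, freq_ghz: float) -> float:
        # Standard free-space formula: 20*log10(d_km) + 20*log10(f_GHz) + 92.45
        return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

    # Compare a 4G LTE low band, a mid band, and an FR2 millimeter-wave band at 1 km.
    for freq in (0.7, 3.5, 28.0):  # GHz
        loss = free_space_path_loss_db(1.0, freq)
        print(f"{freq:>5} GHz at 1 km: {loss:6.1f} dB of path loss")
    # The 28 GHz signal arrives roughly 32 dB (over 1,000x) weaker than the 700 MHz one.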

Quick fun fact, when you start to get into the 60 GHz+ frequencies of radio transmission, little things like Oxygen molecules can start to cause interference in the transmission, so don’t expect to see these coming anytime soon to a theater near you.

Oh, for the conspiracy junkies out there: no, FR2 frequencies, especially 60 GHz, do not trigger some reaction that rips the oxygen out of the air and suffocates people. First, go back and learn biology; O2 is but one teeny fraction of the atmosphere we breathe, and there’s far more N2 and other gases in the mix. What is actually going on is that the O2 in the air attenuates a 60 GHz signal, basically stripping the signal from the air, not the other way around. If it worked the other way, we would all be gasping for breath right now as we speak.

So, before going deeper: the FR2, or millimeter-wave, bands will not really be practical for mobile usage in the near future; they will mostly be useful for Internet of Things (IoT) devices and fixed wireless broadband to the home. Instead of running a cable or fiber, 5G can be used to provide broadband service to residences and businesses, just not in rural areas.

Here’s a good graphic from Cablefree.net that breaks down the uses of 5G by band. You’ll notice that for the most part, we will still be using the 700 MHz bands for mobile, so we’re not going to get much advantage over 4G LTE there.

CableFree 5G Frequency Bands

In fact, 5G and 4G LTE networks share 4G LTE’s frequencies, because 5G is not wholly and immediately replacing 4G LTE. A network operator can use frequency division duplex (FDD) and time division duplex (TDD) technologies to share spectrum. FDD uses two separate frequency bands, one for the uplink and one for the downlink. TDD uses a single frequency band for both uplink and downlink, switching between the two from moment to moment. In the US, carriers use a mix of both, depending on the band.

Ok, going deeper, one big difference between 4G LTE and 5G is how low the frequencies go. Remember a few years back how we had to get those special boxes on our TVs for digital Over the Air (OTA) broadcasts? Well, that was because the FCC was clearing out chunks of the 600 MHz and 700 MHz broadcast spectrum for some advanced communications technology that was coming. Well, now we have that technology; it’s called 5G.

What this means is that while a 4G LTE tower servicing customers doing low-data things, like making phone calls, can extend several miles from a given cell tower, this so-called Low Band 5G extends that coverage to hundreds of square miles around that same tower, with similar to slightly better data rates than the 4G LTE connection. This means that just by adding equipment to existing towers, in most cases, carriers will be able to extend their reach without building new towers while improving existing customers’ experience. Granted, this will need a phone capable of Low Band 5G, but those are coming out in late 2020 and early 2021, and as carriers add the new base stations to the existing towers, this becomes a win-win.

The good news, though, is that you won’t have to decide between getting a 4G LTE-capable phone or a Low Band 5G-capable phone, because manufacturers are making modems that can use both, and carriers can run both technologies at once. So while you can be farther away from a tower and still make a call using 5G, you may not get a better experience running PUBG right off the bat, so as with everything, there’s a tradeoff.

In conclusion, the confusing thing right now is all the marketing wank out there around 4G LTE, 5Ge, 4G, 4G LTE-A, etc., and the sheer expanse of what 5G encompasses. In the end, I wouldn’t stress about upgrading to a 5G-capable handset unless you are already in the market for an upgrade, because it will be a few years before carriers in the US get things upgraded to the point where it makes sense for those of us who don’t live in big cities and rarely travel to them.

V for Virtual

What is a VPN? Well, like most things in IT, it’s an acronym: Virtual Private Network. So that makes it all clear now, doesn’t it? I’m sure if you’re here then no, that doesn’t do much to help.

So let’s start with the Private Network part. At home, you have multiple devices, all connected to your home network. On this network, your devices can all talk to each other without any issue, and typically you don’t worry about that occurring. Additionally, when someone not on your network tries to look at your traffic, your router/firewall will keep them out and prevent them from getting into your Private Network.

So, where does the Virtual part come in? Well, a VPN creates a tunnel over the Internet between two Private Networks and encrypts all the traffic in that tunnel. Even though you are now going over the Internet, since you have this tunnel, this Virtual Private Network, the data is heavily encrypted and secure from prying eyes.

Why does this matter, one may ask? Well, a VPN requires a client and a server, and all traffic between these two endpoints is encrypted. The server’s job is to decrypt the traffic coming from the client and encrypt the traffic going to the client. This is where a VPN comes into play.

Companies such as NordVPN provide the VPN server that you rent from them, and the client is, well, your computer or phone. What this does is create a VPN tunnel between your device and the VPN server. When you surf the Internet, the data goes to the VPN server encrypted, then leaves from the server to the Internet as normal. When it comes back, the server encrypts the data and sends it back. This is how your information is safe from prying eyes on the Internet, sort of.
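
As a toy illustration of that “encrypt on one end, decrypt on the other” idea, here's a Python sketch using the cryptography package's Fernet recipe. To be clear, this is an assumption made purely for illustration; real VPNs use protocols like WireGuard, OpenVPN, or IPsec, not a few lines of Python:

    from cryptography.fernet import Fernet

    # Client and VPN server share a secret key (real VPNs negotiate keys per session).
    key = Fernet.generate_key()
    tunnel = Fernet(key)

    # What your device pushes into the tunnel...
    request = b"GET https://example.com/ HTTP/1.1"
    ciphertext = tunnel.encrypt(request)
    print("On the wire (what a snooper sees):", ciphertext[:40], b"...")

    # ...and what the VPN server recovers before passing it along to the Internet.
    print("At the VPN server:", tunnel.decrypt(ciphertext))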

When you are on public WiFi, the data is not generally encrypted, so your data is easily intercepted. When you use a VPN in this instance, your data is encrypted over that public WiFi, which provides the security. The caveat is that as data leaves the server for the Internet, it is no longer in the VPN tunnel or encrypted, so anyone on the far side of the server can still intercept traffic that is not secured using some other method, such as SSL.

Using a VPN is also useful when one does not want their ISP to be able to track where they are going on the Internet, or when you wish to appear like you are coming from a place other than where you physically are. The latter is useful when you want to watch, say Netflix, and you are traveling overseas. Due to licensing agreements, the same shows that are available when you are in the US may not be available when you are in, say, Canada, and vice versa. By connecting to a VPN server in Canada, you are coming from a Canadian location, so you can now watch the Canadian Netflix content.

Moving the conversation back to using a VPN, marketing would like you to think that using a VPN magically makes all your traffic invisible and safe, but that’s not totally true. When you use a VPN, it shifts the risk down the road but doesn’t remove it. Outside of the legal implications of using a VPN to bypass geographic licensing restrictions, it’s dangerous to think that your traffic is completely safe just because you are using a VPN.

First and foremost, if you are using a VPN for security, you are putting your trust in the company providing the VPN service, because in order for them to provide you this service, they have to encrypt and decrypt the data on your behalf, meaning they have all the access to the data you are trying to protect. One issue I see with this is that companies like to sell data in order to turn a profit, and you are now giving one company all of that access. Even if you are using an SSL-based connection, the VPN server still has to maintain state and knows where you’re going and where you’ve been. Granted, your data itself is safe inside the SSL encryption, but that’s only part of the story.

It is possible to run a VPN at home or on a hosted service such as AWS or Azure, where you own and control the server completely. The data is still unencrypted leaving that server, but between your device and the server it’s secure. For instance, running a VPN server on your home network that you connect into secures your traffic, from whatever WiFi you happen to be on, all the way back to your home network.

One of the big misconceptions is that VPNs are associated with security. The perception is that a VPN will render the computer or mobile device immune to malware because it is “secure”. Well, it does no such thing. Web page injection, DNS poisoning, script embedding, and any number of other avenues of attack are in no way affected by the use of a VPN.

All the VPN does is encrypt the data, nothing more. It’s like putting your package inside a lockbox whose combination is known only to you and the provider. Anyone else intercepting the box will have no idea what’s inside it or what the source and destination are. If they sniff on the other side of the provider, though, your address as the originator can still be deciphered in some cases.

In closing, I hope that this helps bring down some of the misconceptions about what a VPN is and isn’t, and break through some of the marketing ploys. Also think about this: these services cost money to provide, so how do these “free” VPN providers pay the bills? Well, quite frankly, instead of you paying for the product, you are the product, in the form of ads and data being sold. Be skeptical of the “too good to be true” situation.

Be careful, be safe, and have a healthy dose of skepticism.

It’s been a long road…

For those that just started to have the theme song from Enterprise play in your head, you’re welcome!

It’s been a spell since I have been on the interwebs, dumping my mind onto paper (metaphorically) and hoping that someone can get some nuggets of knowledge out of my ramblings. Or at the very least a good laugh.

Some folks that have been here before may be confused by the lack of older posts. Well, I decided to start fresh since it’s been a couple of years, so come back often to see what spills out of this noggin.

Follow me on Twitter to see new posts and updates.