What is Web Caching

Web caching is a way of improving server performance by allowing commonly requested content to be stored in an easier-to-access location. This allows visitors to access the content faster, since the server does not have to fetch the same data from its original source on every request.

By creating effective caching rules, content that is suitable for caching will be stored to conserve resources, while highly dynamic content will be served normally. In this guide, we will discuss how to configure Apache using its caching modules.

Note: This guide was written with Apache 2.2 in mind. Changes in Apache 2.4 have led to the replacement of some of the modules discussed here, so not all of the steps recommended below will work on Apache 2.4 installations.
An Introduction to Caching in Apache

Apache has a number of different methods of caching content that is frequently accessed. The two most common modules that enable this functionality are called "mod_cache" and "mod_file_cache".
The mod_file_cache Module

The mod_file_cache module is the simpler of the two caching mechanisms. It works by caching content that:

Is requested frequently
Changes very infrequently

If these two requirements are met, then mod_file_cache may be useful. It works by performing some of the file access operations on commonly used files when the server is started.
The mod_cache Module

The mod_cache module provides HTTP-aware caching schemes. This means that the files will be cached according to an instruction specifying how long a page can be considered "fresh".

It performs these operations using either the "mod_mem_cache" module or the "mod_disk_cache" module. These are more complex caching models than mod_file_cache and are more useful in most circumstances.
Using mod_file_cache with Apache

The mod_file_cache module is useful to cache files that will not change for the life of the current Apache instance. The techniques used with this module will cause any subsequent changes to not be applied until the server is restarted.

These caching mechanisms can only be used with normal files, so no dynamically generated content or files generated by special content handlers will work here.

The module provides two directives that are used to accomplish caching in different ways.
MMapFile

MMapFile is a directive used to create a list of files and then map those files into memory. This is done only at server start up, so it is essential that none of the files set to use this type of caching are changed.

You can set up this type of caching in the server configuration file. This is done by specifying files to be cached in memory in a space-separated list:

MMapFile /var/www/index.html /var/www/otherfile.html /var/www/static-image.jpg

These files will be held in memory and served from there when the resource is requested. If any of the files are changed, you need to restart the server.
CacheFile

This directive works by opening handles to the files listed. It maintains a table of these open file descriptors and uses it to cut down on the time it takes to open these files.

Again, changes to the file during operation of the server will not be recognized by the cache. The original contents will continue to be served until the server is restarted.

This directive is used by specifying a space-separated list of files that should be cached with this method:

CacheFile /this/file.html /that/file.html /another/file/to/server.html

This will cause these files to be cached on server start.
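On Debian-style layouts, enabling the module follows the same a2enmod pattern used later in this guide; a sketch, assuming Apache 2.2 on Ubuntu/Debian:

```shell
# Enable mod_file_cache, then restart so the MMapFile/CacheFile
# directives in the server configuration take effect
sudo a2enmod file_cache
sudo service apache2 restart
```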
Using mod_cache with Apache

The mod_cache module is a more flexible and powerful caching module. It functions by implementing HTTP-aware caching of commonly accessed files.

While all caching mechanisms rely on serving files in some persistent state, mod_cache can handle changing content by configuring how long a file is valid for caching.

The module relies on two other modules to do the majority of the cache implementation. These are "mod_disk_cache" and "mod_mem_cache".

The difference between the two is where the cache is kept: on disk or in memory, respectively. Cached items are stored and retrieved using URI-based keys. This is important to note because you can improve the caching of your site by turning on canonical naming.

This can be accomplished by putting this directive in the server configuration or virtual host definition:

UseCanonicalName On

How to Configure Caching

We will examine some common configuration directives and how they affect the functionality of the caching mechanisms.

If you look in the "/etc/apache2/mods-available" directory, you can see some of the default configuration files for these modules.
Configuring mod_mem_cache

Let's look at the mod_mem_cache configuration:

sudo nano /etc/apache2/mods-available/mem_cache.conf

CacheEnable mem /
MCacheSize 4096
MCacheMaxObjectCount 100
MCacheMinObjectSize 1
MCacheMaxObjectSize 2048

These directives are only read if the mod_mem_cache module is loaded. This can be done by typing the following:

sudo a2enmod mem_cache
sudo service apache2 restart

This will enable mod_mem_cache and also mod_cache.

CacheEnable mem /

The "CacheEnable mem /" line tells Apache to create a memory cache for content stored under "/" (which is everything).

MCacheSize 4096
MCacheMaxObjectCount 100

The next few lines describe the total size of the cache and the kinds of objects that will be stored. The "MCacheSize" directive and the "MCacheMaxObjectCount" directive both describe the maximum size of the cache, first in terms of memory usage and then in terms of the maximum number of objects.

MCacheMinObjectSize 1
MCacheMaxObjectSize 2048

The next two lines describe the kinds of data that will be cached, in terms of file size. The default values specify that files between 1 byte and 2 kilobytes will be considered for caching.
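For instance, a site serving larger static assets might raise these limits. The numbers below are illustrative assumptions, not recommendations; note that MCacheSize is measured in kilobytes while the object-size directives are measured in bytes:

```apache
# Illustrative: a 64 MB in-memory cache, up to 1000 objects,
# caching files between 1 byte and 100 KB
CacheEnable mem /
MCacheSize 65536
MCacheMaxObjectCount 1000
MCacheMinObjectSize 1
MCacheMaxObjectSize 102400
```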
Configuring mod_disk_cache

We can learn about a different set of directives by examining the mod_disk_cache configuration file:

sudo nano /etc/apache2/mods-available/disk_cache.conf

CacheRoot /var/cache/apache2/mod_disk_cache
#CacheEnable disk /
CacheDirLevels 5
CacheDirLength 3

This configuration is loaded if you enable the mod_disk_cache module, which can be done by typing:

sudo a2enmod disk_cache
sudo service apache2 restart

This command will also enable mod_cache, which mod_disk_cache needs in order to work properly.

CacheRoot /var/cache/apache2/mod_disk_cache
#CacheEnable disk /

The "CacheRoot" directive specifies where the cached content will be kept. The "CacheEnable disk /" directive is disabled by default. It is suggested that you enable this on a virtual host basis to get a better idea of what will be cached.
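A sketch of what per-virtual-host disk caching might look like (the host name and paths here are hypothetical):

```apache
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example

    # Cache everything served by this host on disk
    CacheEnable disk /
</VirtualHost>
```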

CacheDirLevels 5
CacheDirLength 3

The other two directives determine the caching structure within the cache root. Each cached element is hashed by its URL, and the hash is then used as a file name and directory path.

CacheDirLevels determines how many levels of directories to create from the hash string, and CacheDirLength determines how many characters are in each directory name.

For example, if you have a file that hashes to "abcdefghijklmnopqrstuvwxyz", then a CacheDirLevels of 2 and a CacheDirLength of 4 would lead to this file being stored in:

[path_of_cache_root]/abcd/efgh/ijklmnopqrstuvwxyz
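The split above can be sketched with standard shell tools; this is purely illustrative, and the hash shown is a stand-in rather than a real mod_disk_cache key:

```shell
# How CacheDirLevels 2 / CacheDirLength 4 would split a hash
# into a directory path
hash="abcdefghijklmnopqrstuvwxyz"
dir1=$(echo "$hash" | cut -c1-4)   # first directory level:  abcd
dir2=$(echo "$hash" | cut -c5-8)   # second directory level: efgh
rest=$(echo "$hash" | cut -c9-)    # remainder becomes the file name
path="$dir1/$dir2/$rest"
echo "$path"
```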

A cache stored on disk can grow large, depending on the expiration dates of the content. Apache includes a tool called "htcacheclean" to pare the cache down to a configured size, though its full use is outside the scope of this guide.
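As a rough sketch, htcacheclean can run as a daemon that periodically trims the cache; the interval, path, and size limit below are illustrative assumptions:

```shell
# Check every 30 minutes (-d30), run nicely (-n), delete empty
# directories (-t), and keep the cache under 300 MB (-l)
sudo htcacheclean -d30 -n -t -p /var/cache/apache2/mod_disk_cache -l 300M
```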
Using CacheLock to Avoid Overwhelming the Backend

A problem can arise on a busy server when a cached resource expires.

The cached copy that needs to be refreshed must be refetched from the backend. During this time, any additional requests for the file also go to the backend, which can create a huge spike in requests while the cached version is being refreshed.

To avoid this situation, you can enable a lock file that indicates that the resource is being re-cached and that subsequent requests should not go to the backend, because the refresh is already underway.

This lock prevents Apache from trying to cache the same resource multiple times when it is first cached. It also serves the stale resource until the refreshed cache is complete.

Three directives are used to control CacheLock:

CacheLock [ On | Off ]
CacheLockMaxAge [time_in_seconds]
CacheLockPath [/path/to/lock/directory]

The first directive turns on the feature and the third directive establishes the directory where resource locks will be created.

The second directive, CacheLockMaxAge, is used to establish the longest time in seconds that a lock file will be considered valid. This is important in case there is a failure or an abnormal delay in refreshing a resource.
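Put together, a minimal configuration might look like this; the lock directory and timeout are illustrative choices:

```apache
# Serve stale content while a single request refreshes the resource
CacheLock On
CacheLockMaxAge 5
CacheLockPath /tmp/mod_cache-lock
```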
Conclusion

Caching in Apache can be simple or involved depending on your needs. While any kind of caching can improve your site performance, it is important to test your configurations to ensure that they are operating correctly.

It is also essential that you are familiar with the repercussions of improperly configured caching. It is sometimes necessary to re-evaluate your security practices after implementing caching to ensure that private resources are not accidentally being cached for public consumption.

The Apache documentation has plenty of information about how to configure caching if you get stuck. Even if you have a handle on the configuration, it is a helpful reference and a good resource.

 

source: https://www.digitalocean.com/community/tutorials/how-to-configure-content-caching-using-apache-modules-on-a-vps


Ubuntu hasn’t had the best reputation among Linux users over the past few years–with some even going so far as to call it “boring”. If you’ve been hesitant to try it out, then hold on to your seats–Ubuntu 16.04 “Xenial Xerus” is not only an exciting release, but one that has the potential to be a game changer for the Linux ecosystem.

Ubuntu first leaped into the Linux world in 2004 and, in doing so, completely changed the face of Linux, taking it from the days of “only usable by experienced geeks” to the era of “Linux for Human Beings”. Now, 12 years later, Canonical just might be on the verge of catching that lightning in a bottle again. Ubuntu 16.04 was released today, and with it comes a ton of improvements throughout the distro. There are many changes that improve the usability and experience for the end user, as well as potential landmark changes that might pique the interest of even the most skeptical of developers.

The Unity Launcher Can Be Moved to the Bottom of Your Screen


Thanks to the Ubuntu Kylin team, users can now attach the Unity Launcher to the bottom of their screen instead of being forced to keep it on the left side. Believe it or not, it’s taken almost six years to get this basic feature.

There are a couple of ways to accomplish this, but the easiest way is through one command in the Terminal (though admittedly a fairly long command). Open up your terminal with Ctrl+Alt+T or from the Dash and run the following:

gsettings set com.canonical.Unity.Launcher launcher-position Bottom

You can also revert back to the Left side if you decide later that you don’t like it by running:

gsettings set com.canonical.Unity.Launcher launcher-position Left

That’s all it takes.

Online Dash Results Are Off by Default, and Updates to the “apt” Command


There has been quite a bit of controversy for a couple of years over the online search results in Ubuntu’s Dash. Some people even went so far as to (inaccurately) call them “spyware”. Ubuntu 16.04 puts an end to that controversy by disabling the results by default.

GNOME Software Replaces Ubuntu Software Center


The Ubuntu Software Center was another blemish on Ubuntu’s name. It was slow, unreliable, and the overall user experience was lacking. Ubuntu 16.04 addresses this issue by replacing the Ubuntu Software Center with GNOME’s Software solution. Ubuntu adopting GNOME Software is a great sign that Canonical is embracing more community involvement and is willing to include an alternative piece of software if it’s better overall.

Similarly, Canonical adopted a new Calendar app in Ubuntu 16.04–just another way they’re adopting better software from the GNOME project.

If you’re more of a terminal junkie, 16.04 also adds new features to the “apt” command so you can simplify your command-line package management even further than before. Ubuntu 16.04 sees the addition of apt autoremove which replaces apt-get autoremove and apt purge package(s) which replaces apt-get purge package(s).

Unity 7.4 Is the Smoothest Unity Experience Yet


I’ve been testing Ubuntu 16.04 and Unity 7.4 for quite some time now and I have to say, Unity 7.4 is by far the smoothest and best Unity experience I’ve had. I was a holdout from the days of 12.04’s Qt-based Unity, but I’m glad to see that Ubuntu 16.04 has adopted its best features. Here are the most notable changes arriving in Unity 7.4:

  • Shortcuts for Session Management such as restart, shutdown, etc from the Unity Dash
  • Icons appear in launcher while loading applications
  • Ability to move the Unity launcher to the bottom of the screen
  • Online Dash Results are disabled by default
  • App Menus can now be set to ‘Always Show’
  • New scroll bars in Unity Dash
  • External storage/Trash now display number of windows open
  • Quicklist (Jumplist) added to Workspace Switcher
  • Ability to Format a drive within a Unity Quicklist (great time saver but be careful)
  • Alt+{num} can now be used to open External storage items similar to Logo+{num} for opening applications
  • Ubuntu themes have improved Client Side Decorations support.

That’s a lot of good stuff.

ZFS Is Supported by Default in Ubuntu 16.04

ZFS is a very popular filesystem due to its reliability with large data sets, and it has been a hot topic in the Linux community for years. Canonical has decided that ZFS support is necessary, so Ubuntu 16.04 ships with ZFS support built in. ZFS is not enabled by default, however, and that is intentional: since ZFS is not necessary for the majority of users, it fits best in large-scale deployments. So while this is very cool, it’s not going to affect most people.

Ubuntu Snappy Has Potential to Change the Landscape of Linux


Finally, Ubuntu 16.04 introduces Ubuntu Snappy to the desktop, a brand new package management solution that has potential to change the landscape of Linux.

Linux-based operating systems come with many different types of release structures, but the two most common are Fixed Releases (aka stable releases) and Rolling Releases. Both of these common structures have pros and cons: Fixed Releases give you a rock-solid base system, but often with outdated applications that have to be supplemented with something like PPAs. Rolling Releases get you software updates as soon as new versions are released, along with all of the latest bugs. Ubuntu Snappy is a new release structure that aims to combine the benefits of both systems into one.

Think of Snappy as an alternative to .deb files and PPAs. It’s a new form of app distribution that lets developers send you the latest version of their apps–in the form of “snaps”–as soon as they’re ready. They’re much easier and quicker for developers to push out, and you–the user–don’t have to go hunting for a PPA if the app isn’t included with Ubuntu’s default repository packages. And, if one release is buggy, it’s very easy to roll back to the last stable version.

In addition, snaps install differently than the traditional .deb files you’re used to. Snaps install as read-only, mountable, image-based applications, which means you don’t have to worry about whether an app was packaged for Ubuntu 16.04, 16.10, or any other version; that snap will work on any version of Ubuntu that supports snaps.
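For a sense of the workflow, here is a sketch of the snap commands; the package name is hypothetical, and the exact command set assumes the snapd tooling shipping with 16.04:

```shell
snap find editor             # search the store for available snaps
sudo snap install my-editor  # install a snap (hypothetical name)
sudo snap refresh my-editor  # update to the latest published version
sudo snap revert my-editor   # roll back if the new release is buggy
```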

Snappy on the Desktop is still in the early stages, so you won’t be switching to snaps entirely with Ubuntu 16.04. But the groundwork has been laid, and snaps should start to become more common over time. In fact, Ubuntu will be releasing a “Snap Store” of sorts in the future, likely using GNOME Software, making it easier to discover and install apps using Snappy.

Oh Snap! Excitement Is in the Air

I don’t think I’ve been more excited for a new release of Ubuntu since I first started using Linux, many years ago. The potential of Ubuntu Snappy alone has me smiling as I write this very paragraph, but add that to the rest of the changes coming in Ubuntu 16.04 and I’d say Ubuntu has become anything but boring. What do you think of this new release? Is my excitement contagious? Will you be giving Ubuntu 16.04 a try? Let me know in the comments thread.

 

source: http://www.howtogeek.com/251647/ubuntu-16.04-makes-ubuntu-exciting-again


About a year ago, I offered a friend of mine some consulting in exchange for customer research. Essentially I wanted to work with him one-on-one to help him grow his business to see if my strategies and techniques were transferable. In return, he let me use his story in my content and he also wrote about the experience.

My friend, Devon, was producing projects for around $5,000, but he wanted more.

His basic offer was for websites made by a web designer.

Which is what I think a lot of us offer. Someone calls me for a website, and my gut reaction is to say, "sure, I make websites! Here's how much it will cost..."

And this was exactly what Devon was doing. He was selling websites—which makes complete sense.
Stop Selling Websites

The first thing that I taught Devon was how to sell a vision. A website by itself is a commodity. I can get a website from company A or company B, and if the two companies are of similar quality and price, the results will probably be fairly similar. This is where price pressure comes from: the commoditization of websites.

In order to leave the world of commodities, you have to sell something bigger than a website.

When Apple sold the original iPod, they weren't selling an MP3 player (those were already on the market); they were selling 5,000 songs in your pocket. That simple shift in the value statement changed the game. The net result was that Apple's product not only cost a lot more, but became a device sold worldwide with near-universal love.

Apple proves that with the right vision and value statement, you can sell the same thing at a higher price and with higher demand.

This is why I started selling Online Businesses. The Online Business Ecosystem is a diverse set of online strategies and tactics that help drive business. The website is the central headquarters of this ecosystem, so, in essence, it's the most important asset to invest in.

I teach my customers about how a website's objective is to do something with the traffic it receives—this is called "conversion." I also teach them how to amplify their investment by driving more visitors or "traffic."

All non-website properties help to make up the existence of the ecosystem—supporting it and nurturing the growth of a business. For instance, it doesn't matter what your website says if the review site your customer is searching has the wrong web address...or worse yet, it's cluttered with negative reviews.

So Devon stopped selling websites. He learned to sell online businesses and have the Online Business Conversation with his prospects early and often.
Invest Time Over Time

When I met Devon, he was struggling to break into that $10k magic-zone. He was selling like most of us learn to: A prospective customer would call or email him. He would set up a call, learn about what they wanted, and he would provide an estimate for exactly that.

Then the prospect would either:

a) Haggle over price

or

b) Drop off the radar

This happens because there is no relationship in this model. But we don't build relationships by talking about our kids. We build relationships by increasing the number of interactions we have before we ask someone for money.

Instead of investing two hours up front to satisfy a curious notion from a non-buyer, I trained Devon to spend only a limited amount of time with a prospective customer up front. Quickly move to a scheduled qualification meeting three days out. Then move into a multi-meeting discovery phase. Then propose a solution. Finally, after all of that, you get to the proposal.

This sales process took the Online Business Ecosystem paradigm and allowed it to sink in with the prospective customer. They now knew they needed more than just a website, which is one goal of spreading out your interactions. Give a prospect more time to learn that you show up on time, have an agenda, and are ever more curious about their business with each meeting. By not selling right away, you build trust.

Which comes in handy when it's time to say, "You need something much bigger than a couple thousand dollar website."
Become Ferdinand Magellan

The final key teaching I delivered to Devon was that of Discovery (with a capital "D"). I taught him that each customer's business, even in an industry he knows well, is uncharted territory. No one really knows everything about a business (including the owner).

When I first met Devon, he did very little discovery. It was all reactive: "What would you like me to do?" The concept of spending at least two complete meetings dedicated to walking through a client's needs changed the game.

By going into a customer relationship with the simple goal of helping the prospective client learn about what is going on in their Online Business, you can deliver value in a way that your prospect probably hasn't seen before.

To say on the first interaction: "I don't know if I can help you. I need to spend time with you and do some research and figure out if I can have the impact we both want," is an amazing wake up to someone being thrown low dollar bids, left and right, by other web firms.

Discovery inherently moves the conversation far away from the technological details. It removes our desire to explain what responsive design is, or how cool the CMS widget is that I have fallen in love with. Although those topics might come out naturally, we lose the need to present them. They become part of the greater conversation of how a certain problem will be solved.
The Proof is in the Pudding

One of the first things that Devon and I did was to create a one-page focus plan. I remember that one of his goals on the plan was to score a $10,000 project early in 2013. It was only a month after setting the goal when he locked in his first $10k. Then he wanted to do three. Then it turned into, "I just got a $20k project."

Devon texted me the other day letting me know how things were going. It was one of those messages where my face lit up in a big, fat smile. I showed my wife and said, "Check this out, Devon has been just crushing it this year!"

How cool. Not just because he's stepped up and made his goals happen, but because Devon has created his own enterprise for himself and his family that matters a lot more than the money. He can work from home, take care of his new twins when he needs to, and start building wealth for himself and his family.

I love to build things. But I also know why I do this. I want to provide for my family while making the world a better place.

I'm glad that the work I do has a positive impact on the people around me. I know his story certainly makes an impact on me.

source: http://www.ugurus.com/blog/how-i-taught-a-web-designer-to-sell-10k-projects


It's good to research the meaning of some words, so I did with "prank". Got it. Wikipedia defines it as below:

A practical joke is a mischievous trick played on someone, generally causing the victim to experience embarrassment, perplexity, confusion or discomfort. A person who performs a practical joke is called a "practical joker". Other terms for practical jokes include prank, gag, jape, or shenanigan.

Practical jokes differ from confidence tricks or hoaxes in that the victim finds out, or is let in on the joke, rather than being talked into handing over money or other valuables. Practical jokes are generally lighthearted and without lasting impact; their purpose is to make the victim feel humbled or foolish, but not victimized or humiliated. However, practical jokes performed with cruelty can constitute bullying, whose intent is to harass or exclude rather than reinforce social bonds through ritual humbling.

In Western culture, April Fools' Day is a day traditionally dedicated to conducting practical jokes.

Now, the video above is Lecrae's. Dude got pranked! I liked it. Watch it. Lemmi hear you laugh and roll on the floor.

share with me crazy ones too. 


It’s hard to find anyone who’d argue that websites load too quickly. Mobile pages constantly creak under the weight of complex visual elements and ad networks. It’s led to an ad-blocking boom, boutique speed-boost solutions from Google and Facebook, and now, a system from MIT that its creators claim trims page-load times by up to 34 percent.

Polaris, as its creators call it, is a product of MIT’s Computer Science and Artificial Intelligence Lab (CSAIL). And while its benefits vary based on the site deploying it, there’s maybe no comparable technology that’s as effective as it is universal. The only catch? Figuring out how to deploy it to the websites and browsers you use every day.

Putting It Together
The idea for Polaris was first hatched about a year ago, says lead author and MIT CSAIL PhD Ravi Netravali. The breakthrough, after years of thinking through the page load problem, came after he started focusing primarily on mobile.

“Because on mobile networks these delays are much higher than they are on wired networks, that’s where we focused our energy,” says Netravali. Previous high-profile efforts to speed mobile pages, like the SPDY protocol, or Google’s open-source Brotli algorithm, have focused on data compression. That’s helpful when bandwidth is scarce, but in many markets that’s not the most serious impediment to speed. The key isn’t how much comes through the transom, but how many trips it takes to get it there.

To understand how and why Polaris works, it’s important to remember that a web page doesn’t spring forth wholly formed. Every time you type in a URL, the site that eventually materializes comprises a mishmash of JavaScript, HTML, CSS, and more. Moreover, many of these items are interdependent, and your browser can waste precious seconds deciding in which order it should load which parts, and why. When downloading one object requires fetching even more objects, that’s known as a dependency.

“If you load a page today, there are hundreds of objects that you have to load. There are shared states between them, they all interact; one object can write for something while the other object reads,” says Netravali. “That dictates the order that a page loads these objects.”

As you might imagine, it’s an inefficient process; the MIT team compares it to figuring out a business travel itinerary on the fly, versus having a list of cities ahead of time to help you plan the most practical route. Polaris provides that list, and acts as a travel agent. It maps all of these dependencies, enabling objects to download in a streamlined fashion, and cutting back on the number of times a browser has to cross a mobile network to fetch more data.

It’s not a cure-all for the entire web. For a relatively austere site like the Apple.com homepage, made up primarily of images that don’t depend on one another, Polaris doesn’t show substantive gains over plain vanilla Firefox. Then again, sites like that tend to load quickly to begin with. It’s when web destinations get more feature-filled that Polaris really kicks in.

“For the New York Times homepage, Weather.com, these types of sites where there’s a lot of stuff going on, that’s where you see gains,” says Netravali. “When there’s a lot of objects on the page, that’s where Polaris can really help, because it’s important to prioritize some over the others.”

Those objects also extend to advertising network intrusions, which are responsible for much of the bloat that weighs down the web. Facebook’s Instant Articles and Google’s AMP have also tried to speed up pages by mitigating the ad problem, but Polaris acts as a complement to those efforts, without requiring any front-facing changes to the content of either the page itself, or the ads that run on it.

“If it turns out that the ads are very slow, because right now they’re coming super late in the page—which actually happens often, because if I’m CNN and I have an ad, I want it to come later because I don’t care if you see it right away or not—that leads to higher page load times,” says Netravali. “With Polaris, if there are resources available earlier in the page load, and it doesn’t actually interact with other parts of the page, Polaris will say [to the browser] OK, why don’t you get it right now?”

One last Polaris benefit? While it’s not the first dependency-tracker, it’s the first one to be browser agnostic. That means it could hypothetically work on any site, in any browser, through however many software updates. The question now is, will it?

Need for Speed
Polaris works, but not to your benefit. Not yet, anyway. Before it’s deployed in a broader sense, a few things need to happen.

First, websites have to sign on to run the software on their servers to generate the “dependency graphs” that give the JavaScript, HTML, images, and other elements their marching orders. Then, they’d like to convince web clients—the Chromes and Firefoxes and Safaris and Edges of the world—to incorporate Polaris as well.

“We didn’t modify the browser, and the reason for this was we wanted to be browser agnostic,” says Netravali. “In the future, things would be faster than they are today if this were integrated on the browser side.”

The MIT team will find out what kind of appetite there is on the browser end next week, when it officially presents its Polaris paper. The possibilities are intriguing, particularly because it’s the kind of technology that could represent a formidable competitive advantage for one company over another. Being able to promise a speed increase of up to a third may be enough to prompt more than a few converts. On the other hand, the more ubiquitous Polaris is on the browser side, the more likely websites will be to go through the trouble of integrating it.

That’s a balance they’ll have to negotiate eventually, but for now Netravali is just focused on getting the word out.

“At the end of the day, our main goal is as many people using this as possible,” he says. With those kinds of performance improvements, let’s hope they achieve it.

source: http://www.wired.com/2016/03/mit-polaris-faster-web-pages

