Calgarypuck Forums - The Unofficial Calgary Flames Fan Community

Go Back   Calgarypuck Forums - The Unofficial Calgary Flames Fan Community > Main Forums > The Off Topic Forum > Tech Talk
Old 11-17-2011, 07:43 AM   #21
sclitheroe
#1 Goaltender
 
Join Date: Sep 2005
Exp:
Default

Quote:
Originally Posted by MaDMaN_26 View Post
I love how you can be so incredibly wrong but yet be so absolute in telling others how wrong they are...
Yeah, that was heavy-handed... there's more than one way to look at this issue, so nobody is wrong in the context of this discussion. Sorry about that.
__________________
-Scott
sclitheroe is offline   Reply With Quote
Old 11-17-2011, 10:10 AM   #22
MaDMaN_26
Powerplay Quarterback
 
MaDMaN_26's Avatar
 
Join Date: Feb 2007
Location: Calgary
Exp:
Default

Quote:
Originally Posted by Rathji View Post
Maybe you don't understand what he said then, because nothing you have quoted contradicts it.

I don't want to repost something I have posted many times before, but if all this bandwidth is so cheap because there is no real scarcity, then why has Shaw's network performance suffered greatly in some areas since they drastically increased speeds and monthly bandwidth allotments this past spring/summer? Because the network is nearing capacity in those areas, which, by definition, means the resource truly is scarce.

If it wasn't, then network congestion would never exist.

If you show me a network that never has congestion and never has a slowdown, then I will 100% agree with your claims, but until then, you are only considering half of the picture.
I think the study is more about the big picture than about Shaw's poor choice of topology for the customer demarcation. When I used Shaw in the '90s it was a bit of a token-ring-style setup: if you were the house closest to the main demarcation you had ample speed, but every house after that shared its bandwidth with the previous one, meaning if anyone signed on and downloaded something it would affect your performance. A token ring of sorts...

I can't imagine they still use that antiquated technology, but it may be something similar... I'm sure the main backbone of the network is not as congested as you feel, but the last mile may be. That would be more about Shaw poorly implementing the last-mile solution than about the entire network being full, and I do not think it would be fair for them to punish their users because they implemented a poor solution. Even more likely, they have bottlenecked you, using software to throttle network traffic based on time of day to create a false scarcity while claiming they have no choice; something this study and the CRTC ruling prove they have no qualms about doing. That makes more sense to me than the network actually being at full capacity.

I'm just guessing and conjecturing, though... if anyone has real data on this, or other reasons why that would be happening, I'm listening.

I would strongly suggest you visit http://openmedia.ca/switch and switch internet providers. You'd still be on Shaw's network, but as of this ruling they cannot throttle or restrict your bandwidth if you're under one of these smaller ISP umbrellas. I'd bet you'll be amazed how suddenly you no longer have a problem.
__________________
______________________________________________
http://openmedia.ca/switch
MaDMaN_26 is offline   Reply With Quote
Old 11-17-2011, 10:11 AM   #23
MaDMaN_26
Powerplay Quarterback
 
MaDMaN_26's Avatar
 
Join Date: Feb 2007
Location: Calgary
Exp:
Default

Quote:
Originally Posted by sclitheroe View Post
Yeah, that was heavy-handed... there's more than one way to look at this issue, so nobody is wrong in the context of this discussion. Sorry about that.

Same here.
__________________
______________________________________________
http://openmedia.ca/switch
MaDMaN_26 is offline   Reply With Quote
Old 11-17-2011, 02:27 PM   #24
MaDMaN_26
Powerplay Quarterback
 
MaDMaN_26's Avatar
 
Join Date: Feb 2007
Location: Calgary
Exp:
Default

A mixed-bag article on the CRTC's ruling:

http://www.cbc.ca/news/arts/story/20...c-billing.html
__________________
______________________________________________
http://openmedia.ca/switch
MaDMaN_26 is offline   Reply With Quote
Old 11-17-2011, 02:30 PM   #25
To Be Quite Honest
Franchise Player
 
Join Date: Jan 2010
Exp:
Default

Open Media is a great source for keeping tabs on how reliably the government serves people on issues like this. I post a lot of their stuff on CP.
To Be Quite Honest is offline   Reply With Quote
Old 11-17-2011, 03:59 PM   #26
sclitheroe
#1 Goaltender
 
Join Date: Sep 2005
Exp:
Default

I think one of the complexities in this issue is that we are really talking about two different, simultaneous resources: there's capacity, and there's throughput. One is a measure of quantity, the other of rate. Consumers want lots of data, and they want it quickly.

As I've pointed out in other threads, everyone could have a 1 TB cap (which is, let's face it, essentially unlimited) if they settled for 3 megabits. A paltry 3 Mbps link, sustained over 31 days, will deliver just about 1 TB of data.

It's likely that the ISPs could sustain 3 Mbps unlimited to everybody today for a fixed fee, regardless of how their networks are currently built out.
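The arithmetic behind that 1 TB figure checks out; a quick back-of-the-envelope sketch (assuming decimal units, i.e. 1 TB = 10^12 bytes, as ISPs usually advertise):

```python
# Sanity check: how much data does a sustained 3 Mbps link move in 31 days?
# Assumes decimal units (1 TB = 10**12 bytes), as ISPs typically advertise.

link_mbps = 3                  # sustained link rate, megabits per second
seconds = 31 * 24 * 60 * 60    # seconds in a 31-day month

bits = link_mbps * 1_000_000 * seconds
terabytes = bits / 8 / 10**12  # bits -> bytes -> terabytes

print(f"{terabytes:.3f} TB")   # prints "1.004 TB"
```

So a 3 Mbps pipe running flat out for a month lands almost exactly on 1 TB.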

But people also want throughput, or what is essentially burstable bandwidth (i.e. they don't need 100 megabits all the time, but they'd sure like it from 6-10 pm nightly when they're home, and on weekends too).

Maybe what the ISPs need to do is find a middle ground where we don't pay for usage; instead we pay a premium for peak periods, or for bandwidth utilization that bursts above some agreeable value. I monitor my bandwidth very closely using detailed tools at home, and my observation is that it is extremely rare for the home network to exceed 5 Mbps utilization unless I'm engaged in certain very specific activities, like streaming NHL games or downloading large files from high-capacity sites.
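A toy sketch of what that kind of billing could look like; every rate, threshold, and hour range here is invented purely for illustration, not any real ISP's pricing:

```python
# Toy model of peak-period billing: a flat base fee plus a small premium for
# each peak-period hour where utilization bursts above a threshold.
# All numbers are made up for illustration.

PEAK_HOURS = range(18, 22)     # 6pm-10pm
BASE_FEE = 40.00               # flat monthly fee, dollars
PEAK_PREMIUM = 0.50            # dollars per heavy-use hour during peak
BURST_THRESHOLD_MBPS = 5       # utilization above this counts as "heavy"

def monthly_bill(hourly_samples):
    """hourly_samples: list of (hour_of_day, avg_mbps) readings for the month."""
    heavy_peak_hours = sum(
        1 for hour, mbps in hourly_samples
        if hour in PEAK_HOURS and mbps > BURST_THRESHOLD_MBPS
    )
    return BASE_FEE + PEAK_PREMIUM * heavy_peak_hours

# Example: 20 evenings of heavy streaming, plus lots of light overnight use
usage = [(19, 40.0)] * 20 + [(3, 0.5)] * 200
print(monthly_bill(usage))     # 40 + 0.5 * 20 = 50.0
```

The point of a scheme like this is that the off-peak traffic is free: only behaviour during the congested window moves the bill.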

It actually sounds like the CRTC is leaning that way:
Quote:
Tuesday's CRTC decision allows large ISPs to charge their wholesale internet customers either a flat rate independent of usage, or a rate based on network capacity or speed per 100 megabits per second, as well as additional fees for network access and the transit of data through the wholesaler's network. The CRTC set rates for different large ISPs based on their preferred billing scheme and their claimed costs for running the network. It said large ISPs may file an application if they would like rates approved for the other billing scheme also.
__________________
-Scott
sclitheroe is offline   Reply With Quote
Old 11-17-2011, 04:28 PM   #27
Rathji
Franchise Player
 
Rathji's Avatar
 
Join Date: Nov 2006
Location: Supporting Urban Sprawl
Exp:
Default

Quote:
Originally Posted by MaDMaN_26 View Post
I can't imagine they still use that antiquated technology, but it may be something similar... I'm sure the main backbone of the network is not as congested as you feel... but the last mile may be. That would be more about Shaw poorly implementing the last mile solution than the entire network being full... I do not think it would be fair for them to punish their users because they implemented a poor solution.
It is Hybrid Fiber-Coax, which you claim is a poor solution.

What makes you say that? Obviously you have some understanding of the technology that indicates it is inferior to some other option.
__________________
"Wake up, Luigi! The only time plumbers sleep on the job is when we're working by the hour."
Rathji is offline   Reply With Quote
Old 11-17-2011, 07:57 PM   #28
MaDMaN_26
Powerplay Quarterback
 
MaDMaN_26's Avatar
 
Join Date: Feb 2007
Location: Calgary
Exp:
Default

Quote:
Originally Posted by Rathji View Post
It is Hybrid Fiber-Coax, which you claim is a poor solution.

What makes you say that? Obviously you have some understanding of the technology that indicates it is inferior to some other option.
I assume you're asking about my statement that token ring is antiquated?

Only in that it lost... my understanding (I may be misinformed) is that in the late '80s and early '90s Token Ring and Ethernet were at odds, a little like Betamax and VHS for lack of a better example. Token Ring might actually have been the better solution (the Betamax), as it had the potential to carry bigger packets, which would mean less processing, and therefore less congestion, at switches and the like, for more data throughput. But it lost the battle to Ethernet, not because of proprietary greed, but because Ethernet outpaced Token Ring in technological advancements... thanks to those advancements Ethernet is the clear standard, and using Token Ring now would be cumbersome and costly, as the supporting technology just isn't there anymore.

Again, I have bits and pieces... it's been a while since I studied this, but I believe when you say hybrid fiber-coax you are talking about the physical layer, not the network layer, which is where I believe topology falls... not quite the same... I think... lol, you may actually be talking about the transport layer... bah...

DSL works on a different topology, and perhaps Shaw has never changed theirs just due to logistics... DSL connections go directly from each home to the demarcation point. There is no adverse effect from your neighbor downloading the BitTorrent Jenna Jameson freeleech pack; it does not affect you, and once the data hits the fiber-optic demarcation point there is no chance it's maxing out the network capacity. This is where the study comes into play: technology has allowed the existing physical layer on the backbone of these large networks to handle far more data than the same fiber could a few years ago. Cable will always burst faster but never be as consistent.
__________________
______________________________________________
http://openmedia.ca/switch

Last edited by MaDMaN_26; 11-17-2011 at 08:14 PM.
MaDMaN_26 is offline   Reply With Quote
Old 11-17-2011, 08:21 PM   #29
MaDMaN_26
Powerplay Quarterback
 
MaDMaN_26's Avatar
 
Join Date: Feb 2007
Location: Calgary
Exp:
Default

Interesting side note for DSL users... something I did not know until recently.

This cable is the maximum length your ADSL modem should be from the demarcation point in your home: 14 feet. If you plug your DSL modem into a phone jack on the third floor, you most likely lose speed, especially upload, as a direct result... I have my modem on this cord, patched directly to my wall panel in the basement. It was easy because I do not bother with a home phone line and so have what they call a "dry loop".

Once I found this out, my upload speed increased by about 30% and download by about 10-20%-ish. My own personal experience... but if you are having speed issues, it's worth a shot.
__________________
______________________________________________
http://openmedia.ca/switch
MaDMaN_26 is offline   Reply With Quote
Old 11-17-2011, 10:11 PM   #30
sclitheroe
#1 Goaltender
 
Join Date: Sep 2005
Exp:
Default

Quote:
Originally Posted by MaDMaN_26 View Post
I assume you're asking about my statement that token ring is antiquated?

Only in that it lost... my understanding (I may be misinformed) is that in the late '80s and early '90s Token Ring and Ethernet were at odds, a little like Betamax and VHS for lack of a better example. Token Ring might actually have been the better solution (the Betamax), as it had the potential to carry bigger packets, which would mean less processing, and therefore less congestion, at switches and the like, for more data throughput. But it lost the battle to Ethernet, not because of proprietary greed, but because Ethernet outpaced Token Ring in technological advancements... thanks to those advancements Ethernet is the clear standard, and using Token Ring now would be cumbersome and costly, as the supporting technology just isn't there anymore.

Again, I have bits and pieces... it's been a while since I studied this, but I believe when you say hybrid fiber-coax you are talking about the physical layer, not the network layer, which is where I believe topology falls... not quite the same... I think... lol, you may actually be talking about the transport layer... bah...

DSL works on a different topology, and perhaps Shaw has never changed theirs just due to logistics... DSL connections go directly from each home to the demarcation point. There is no adverse effect from your neighbor downloading the BitTorrent Jenna Jameson freeleech pack; it does not affect you, and once the data hits the fiber-optic demarcation point there is no chance it's maxing out the network capacity. This is where the study comes into play: technology has allowed the existing physical layer on the backbone of these large networks to handle far more data than the same fiber could a few years ago. Cable will always burst faster but never be as consistent.
This is a superb introduction to DOCSIS technology, and an absolute must-read if you have any interest in networking at all:

http://arstechnica.com/business/news...net-access.ars

I bet you had no idea your TCP/IP packets are travelling inside a QAM-encoded video stream... that's right, your internet access comes to you via a digital video signal!
__________________
-Scott
sclitheroe is offline   Reply With Quote
Old 11-17-2011, 10:57 PM   #31
MaDMaN_26
Powerplay Quarterback
 
MaDMaN_26's Avatar
 
Join Date: Feb 2007
Location: Calgary
Exp:
Default

Quote:
Originally Posted by sclitheroe View Post
This is a superb introduction to DOCSIS technology, and an absolute must-read if you have any interest in networking at all:

http://arstechnica.com/business/news...net-access.ars

I bet you had no idea your TCP/IP packets are travelling inside a QAM-encoded video stream... that's right, your internet access comes to you via a digital video signal!
huh, first paragraph... definitely part of the overall problem.

Quote:
The ideal way to build a national broadband network for access to the Internet would be with a high-bandwidth, bidirectional cable running to each individual household. But sometimes you have to work with what you've got, and in America, what we have are cable TV networks. These networks have the bandwidth, but not the bi-directional part—they weren’t originally intended for two-way communication. Worse, the cables for many neighbors all connect together, so it's not possible to send a signal to just one household. And yet, cable companies manage to provide 100 Mbps bandwidth to their broadband customers using this flawed infrastructure, and they do it without compromising the preexisting cable TV service. The tech behind this magic trick goes by the name of DOCSIS, which stands for Data Over Cable Service Interface Specifications.

*************
I will read the whole thing, thanks Sclitheroe.
__________________
______________________________________________
http://openmedia.ca/switch
MaDMaN_26 is offline   Reply With Quote
Old 11-18-2011, 06:41 AM   #32
Rathji
Franchise Player
 
Rathji's Avatar
 
Join Date: Nov 2006
Location: Supporting Urban Sprawl
Exp:
Default

Quote:
Originally Posted by MaDMaN_26 View Post
huh, first paragraph... definitely part of the overall problem.
Unless I am misunderstanding something, so far you claim that Shaw is having problems ensuring a congestion-free network because, for the last leg of their delivery, they are using a broadcast system as opposed to a star topology.

What is your solution then that would improve that situation?

edit: removed the slightly snarky bit - I should refrain from posting before I have had some caffeine in the morning.
__________________
"Wake up, Luigi! The only time plumbers sleep on the job is when we're working by the hour."

Last edited by Rathji; 11-18-2011 at 08:54 AM.
Rathji is offline   Reply With Quote
Old 11-18-2011, 08:30 AM   #33
photon
The new goggles also do nothing.
 
photon's Avatar
 
Join Date: Oct 2001
Location: Calgary
Exp:
Default

Quote:
Originally Posted by MaDMaN_26 View Post
DSL works on a different topology, and perhaps Shaw has never changed theirs just due to logistics... DSL connections go directly from each home to the demarcation point. There is no adverse effect from your neighbor downloading the BitTorrent Jenna Jameson freeleech pack; it does not affect you, and once the data hits the fiber-optic demarcation point there is no chance it's maxing out the network capacity.
There's still finite bandwidth to the demarcation point, so it can still be saturated.

Regardless of topology, it's still an aggregate structure where each layer has less available bandwidth than the layer above, by necessity.

It still comes down to having adequate infrastructure to support the number of users downstream.
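A toy example of that aggregation math; the numbers are hypothetical, but the shape applies to any access topology, DSL or cable:

```python
# Each aggregation tier has less capacity than the sum of the links feeding
# into it. Hypothetical numbers: one neighbourhood node, a 1 Gbps uplink.

homes = 200                 # homes served by one neighbourhood node
access_mbps = 100           # advertised per-home rate
node_uplink_mbps = 1_000    # uplink from the node toward the core

demand_if_all_active = homes * access_mbps          # 20,000 Mbps
oversubscription = demand_if_all_active / node_uplink_mbps

print(f"{oversubscription:.0f}:1 oversubscription") # prints "20:1 oversubscription"
```

A 20:1 ratio is only a problem if enough of those 200 homes are busy at once, which is exactly the peak-hour scenario being argued about in this thread.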

Quote:
Originally Posted by MaDMaN_26 View Post
Cable will always burst faster but never be as consistent.
Simply untrue, unless you can give a technical reason why?
__________________
Uncertainty is an uncomfortable position.
But certainty is an absurd one.
photon is offline   Reply With Quote
Old 11-18-2011, 06:05 PM   #34
MaDMaN_26
Powerplay Quarterback
 
MaDMaN_26's Avatar
 
Join Date: Feb 2007
Location: Calgary
Exp:
Default

Quote:
Originally Posted by Rathji View Post
Unless I am misunderstanding something, so far you claim that Shaw is having problems ensuring a congestion-free network because, for the last leg of their delivery, they are using a broadcast system as opposed to a star topology.

What is your solution then that would improve that situation?
I'm not saying Shaw internet users are doomed or that they all experience problems all the time; I'm just suggesting that the way their network is set up, they are prone to slowdowns, and I believe it is mainly due to the last mile. The obvious solution would be to change the last mile, but I don't see that happening any time soon, probably not ever.

Given they have no plans to change the last mile on their existing network, I don't think the solution is just charging people more money. Collecting more money and leaving their customers with the same infrastructure problems doesn't do anything except grow Shaw's profit margins.

Quote:
Originally Posted by photon View Post
There's still finite bandwidth to the demarcation point, so it can still be saturated.

Regardless of topology, it's still an aggregate structure where each layer has less available bandwidth than the layer above, by necessity.

It still comes down to having adequate infrastructure to support the # of users downstream.
It can become saturated... but I think it's highly unlikely; I just don't think a DSL hub is going to get overloaded by an average neighborhood's internet use. If one did, consistently, it's much more affordable for the provider to upgrade the switch at that one spot and eliminate the problem for an entire neighborhood...


Quote:
Originally Posted by photon View Post
Simply untrue, unless you can give a technical reason why?
Maybe my "always" was a bit presumptuous... the future is unknown... but I don't think I'm flat-out wrong. When I said that, I was thinking in terms of throughput capability... Cable can handle higher speeds than DSL, but because of the topology of the last mile, DSL users are far less likely to see slowdowns or performance degradation, since their neighbors' activity has no impact on them; hence a service that is slower at times, but a more consistent experience...

http://compnetworking.about.com/od/d...eedcompare.htm
__________________
______________________________________________
http://openmedia.ca/switch
MaDMaN_26 is offline   Reply With Quote
Old 11-18-2011, 07:54 PM   #35
Rathji
Franchise Player
 
Rathji's Avatar
 
Join Date: Nov 2006
Location: Supporting Urban Sprawl
Exp:
Default

Quote:
Originally Posted by MaDMaN_26 View Post
I'm not saying Shaw internet users are doomed or that they all experience problems all the time; I'm just suggesting that the way their network is set up, they are prone to slowdowns, and I believe it is mainly due to the last mile. The obvious solution would be to change the last mile, but I don't see that happening any time soon, probably not ever.
To what? You seem to think the solution they have is bad, yet you don't seem to understand the network technologies or have any alternate solution. Please correct me if I am wrong.

Every network in the world has points of congestion unless it is over-specced for its purpose. You can't just say that Shaw should remove all points of congestion without expecting a rise in price. All residential internet infrastructure is drastically oversubscribed; that's how it is possible to get a connection as cheaply as they are able to offer it. If you have ever contacted any ISP about a dedicated pipe with guaranteed bandwidth (because that is the only real alternative to what residential customers have now) and seen how massive that price is compared to what Shaw customers are paying, you would crap your pants.

You claim to want these things, but don't realize how much they cost.
__________________
"Wake up, Luigi! The only time plumbers sleep on the job is when we're working by the hour."
Rathji is offline   Reply With Quote
Old 11-18-2011, 10:49 PM   #36
Azure
Had an idea!
 
Azure's Avatar
 
Join Date: Oct 2005
Exp:
Default

How expensive would it be to lay fiber in the last mile, and what are they putting in now?

Obviously, with the speeds Shaw is capable of pushing through, they have more than old Cat5 cable running down the last mile.
Azure is offline   Reply With Quote
Old 11-19-2011, 10:25 AM   #37
photon
The new goggles also do nothing.
 
photon's Avatar
 
Join Date: Oct 2001
Location: Calgary
Exp:
Default

Quote:
Originally Posted by MaDMaN_26 View Post
Maybe my "always" was a bit presumptuous... the future is unknown... but I don't think I'm flat-out wrong. When I said that, I was thinking in terms of throughput capability... Cable can handle higher speeds than DSL, but because of the topology of the last mile, DSL users are far less likely to see slowdowns or performance degradation, since their neighbors' activity has no impact on them; hence a service that is slower at times, but a more consistent experience...

http://compnetworking.about.com/od/d...eedcompare.htm
This is all based on the assumption that the infrastructure and the size of the subnet are insufficient to handle the activity of all the users in the case of cable, and that they are sufficient in the case of DSL.

Again, that's not something intrinsic to the topology; it's just based on what bandwidth is desired and what hardware is available.

There may be an argument to be made about the cost to Shaw of splitting subnets and adding hardware, but that's an economics argument, not a technology argument. Telus has the same challenge; as they increase their speeds, they seem to be moving their demarcation points closer to the user.

In my personal experience, I can only recall one time my internet wasn't capable of being maxed out during prime time, and that was only for a few months while Shaw split the network in my area. Other than that, I max out my 100 Mbit at 6 or 7 pm all the time.

ETA: A quick look at DSLReports shows threads of Telus clients complaining about slowdowns during peak hours as well.
__________________
Uncertainty is an uncomfortable position.
But certainty is an absurd one.
photon is offline   Reply With Quote
Old 11-19-2011, 09:44 PM   #38
Rathji
Franchise Player
 
Rathji's Avatar
 
Join Date: Nov 2006
Location: Supporting Urban Sprawl
Exp:
Default

Quote:
Originally Posted by Azure View Post
How expensive would it be to lay fiber in the last mile, and what are they putting in now?

Obviously with the speeds Shaw is capable of putting through, they have more than the old cat5 cable running down the last mile.
They run coax, hence the coax part of Hybrid Fiber Coax.

The networking technology isn't an issue here. It is a red herring at best. Bandwidth is not an infinite resource. You increase it and users will find a way to fill it up. Always.

You can argue all day that the business model that Shaw/Telus/Bell and the CRTC is fostering is bad, and you might have a leg to stand on, but to say the problem lies in the fact that they have a limited network capacity is just ridiculous, especially if you are claiming in the same breath that it "only costs 1-3 cents per GB."
__________________
"Wake up, Luigi! The only time plumbers sleep on the job is when we're working by the hour."
Rathji is offline   Reply With Quote
Old 11-19-2011, 10:21 PM   #39
sclitheroe
#1 Goaltender
 
Join Date: Sep 2005
Exp:
Default

Quote:
Originally Posted by MaDMaN_26 View Post
I'm just suggesting that the way their network is set up, they are prone to slowdowns, and I believe it is mainly due to the last mile.
The last mile is not really the issue with any current residential internet infrastructure, in my opinion. Take 10 households at 100 megabits each, which is easily achievable with current last-mile technology, and now you need to lay a gigabit pipe from the head office to a demarc point serving maybe two blocks if you don't want slowdowns when they all light it up.

On top of this, all networks suffer as utilization increases. As you serialize multiple endpoints onto higher-capacity trunk lines, latency goes up and overall throughput suffers. You can combat it with multiple trunks and a multi-tier network, but then you incur additional switching latency. There's no avoiding it, other than not running your networks at peak utilization.
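That utilization effect can be sketched with the textbook M/M/1 queueing formula. It's a drastic simplification of a real packet network, but it shows the shape of the curve:

```python
# Classic M/M/1 queueing result: mean time in system is
# service_time / (1 - utilization), so delay grows without bound as a link
# approaches 100% utilization. A big simplification of real networks, but it
# illustrates why trunks aren't run hot.

def mean_delay(utilization, service_time=1.0):
    """Mean time in an M/M/1 system, in multiples of the service time."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1 - utilization)

for rho in (0.5, 0.8, 0.9, 0.99):
    print(f"{rho:.2f} utilization -> {mean_delay(rho):.0f}x base delay")
```

Delay merely doubles at 50% load but hits 100x at 99%, which is why slowdowns seem to appear out of nowhere as a segment nears capacity.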

You'll never get away from slowdowns at peak times, because it's completely unfeasible not to over-subscribe the networks at the neighborhood level and above.
__________________
-Scott
sclitheroe is offline   Reply With Quote
Old 11-30-2011, 01:01 PM   #40
Tinordi
Lifetime Suspension
 
Join Date: Nov 2010
Exp:
Default

Further to my previous points:

http://www.fiberevolution.com/2011/1...ong-users.html

Quote:
Assuming that if disruptive users exist (which, as mentioned above we could not prove) they would be amongst those that populate the top 1% of bandwidth users during peak periods. To test this theory, we crossed that population with users that are over cap (simulating AT&T’s established data caps) and found out that only 78% of customers over cap are amongst the top 1%, which means that one fifth of customers being punished by the data cap policy cannot possibly be considered to be disruptive (even assuming that the remaining four fifths are).

Data caps, therefore, are a very crude and unfair tool when it comes to targeting potentially disruptive users. The correlation between real-time bandwidth usage and data downloaded over time is weak and the net cast by data caps captures users that cannot possibly be responsible for congestion. Furthermore, many users who are "as guilty" as the ones who are over cap (again, if there is such a thing as a disruptive user) are not captured by that same net.

In conclusion, we state that policies honestly implemented to reduce bandwidth usage during peak hours should be based on better understanding of real usage patterns and should only consider customers’ behavior during these hours; their behavior when the link isn’t loaded cannot possibly impact other users’ experience or increase aggregation costs. Furthermore, data caps as currently implemented may act as deterrents for all users at all times, but can also spur customers to look for fairer offerings in competitive markets.
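The set-intersection argument in that quote can be sketched in a few lines; the user IDs and set sizes here are invented purely to illustrate the logic:

```python
# Sketch of the study's argument: cross the set of users who exceed the
# monthly data cap with the top 1% of peak-hour bandwidth users. Anyone
# over the cap but NOT in the peak set is "punished" without plausibly
# causing congestion. All IDs are made up.

over_cap = {"u01", "u02", "u03", "u04", "u05"}   # exceeded the monthly cap
top_peak = {"u01", "u02", "u03", "u04", "u09"}   # top 1% during peak hours

capped_but_not_disruptive = over_cap - top_peak
share = len(capped_but_not_disruptive) / len(over_cap)

print(capped_but_not_disruptive)   # {'u05'}: over the cap, but not a peak hog
print(f"{share:.0%}")              # prints "20%"
```

With these toy numbers, one fifth of the capped users fall outside the peak-hour set, mirroring the study's finding that data caps catch people who can't be causing congestion.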
Tinordi is offline   Reply With Quote