
CriticalityEvent

S4GRU Premier Sponsor
  • Posts

    166
  • Joined

  • Last visited

Everything posted by CriticalityEvent

  1. http://forum.xda-dev...1&postcount=470 Kernel version: 3.4.10-g014d33e Baseband version: 1.12.11.1119 http://forum.xda-dev...5&postcount=450 PRI Version: 2.87_003 http://forum.xda-developers.com/showpost.php?p=35402435&postcount=431 Yeah, it has that "Sprint Connection Optimizer" line in it that people saw in that document "leak" a month or so back and people wondered if that had something to do with an LTE fix.
  2. Oh yeah, I know about that (I’ve posted in there a few times), I had just never heard of panels on “trees.” I did slightly derail the thread with that post, my apologies.
  3. My head actually cocked to the side when I read that as well! Aside from the issues with anchoring panels to something that is alive and potentially growing, I couldn’t figure out how something like downtilt would work on something that has the rigidity of a real tree. Can you imagine what your service would be like in a storm? “I have signal! No, wait neverm… THERE! Oh, gone again.” It’d be like a giant metronome. http://waynesword.pa...du/faketree.htm These fakes are pretty spot-on from a distance! This is not good news for my driving skills since S4GRU.com has taught me to keep an eye out only for conspicuous panels. Now I'm going to start questioning everything.
  4. So, because your phone costs more, you’re going to use it more? I mean, that makes sense to a point, but it makes no sense if you’re at home, because the increased cost of your phone’s service ensures that you can take it ANYWHERE. I think that most home ISP plans are cheaper than a standard smartphone plan these days. When you do get LTE at your home (and you WILL), and it clocks in at 25-35 Mbps, are you still going to use that over your 8-10 Mbps Wi-Fi connection? Like I asked someone else before, given that your latency will more than likely be lower over your home connection, what can you do with 25-35 that you can’t do with 8-10? Right? I wish more people understood what WiMAX was really about: http://s4gru.com/ind...otection-sites/ I can still see people of this mentality not being able to connect their irresponsible abuse with the instituted data caps. It’ll just be another thing to blame Sprint for. That’s going to irk me more than the P!nk lyric asking “where’s the rock n’ roll?” in reference to music on the radio these days (though that’s like bin Laden asking “what happened to the NYC skyline?”).
  5. But if she knew what was good for her, she'd stay in the kitchen making us our Ice Cream Sandwich. Because, you know, she's asking for it otherwise. http://www.s4gru.com/index.php?/topic/2626-Jelly-Bean-for-our-HTC-EVO-4G-LTE??
  6. You are correct; you are absolutely under no contractual obligation to use Wi-Fi. What we are advocating here is the promotion of offloading for the benefit of everyone, including you. This is…strange. It sounds like an issue that should be posted on XDA, but for now, I would agree that you should do what you can to use the services that you want. Once you do get LTE again, and you are in a trusted Wi-Fi area, why use LTE? I know you are of the mentality that “you paid for it, therefore you should use it,” but what of your home internet connection? Did you not pay for that as well? No, it’s not lopsided. You are offloading when you can so others can use the cellular network when they cannot offload. When they have the ability to offload, they will, which will free up the network to allow you to use it when you don’t have the ability to offload. Looking at metropolitan areas, we have hit the limit on road area, just as we are hitting the limit on spectrum. You’re going to have an easier (and cheaper) time getting around Manhattan in the subway than you are on the roads. The last wireless revolution (3G) didn’t come about from congestion, and the next will come about regardless of how the networks are treated. Treat them badly, and you end up with Sprint's 3G circa 2008-present and data caps. Most of these arguments have been made before; read these: http://s4gru.com/ind...3557#entry73557 http://s4gru.com/ind...4043#entry74043
  7. Oh, believe me, this I know. A friend is looking to get rid of her SIII when she gets her work phone, and she’s keeping it really wrapped-up, so I might take it off her hands in the next week or so. Fortunately, my time is spent mostly in/around the city, with work having the lowest site density of anywhere I frequent. Even at work, however, my signal is pretty consistent in my part of the building. But, like you said elsewhere, it’s very much a “metro-only phone.” No, I’ve never experienced that. In fact, I’ve never even heard of that. Did I completely glaze over some major issue that people are having?
  8. Even with the EVO LTE’s connection issues, now that the Chicago market is more filled-out, the handoffs (that I’ve seen while using the phone) are seamless and the throughput is like riding a unicorn on LSD. A tower by my house must’ve been accepted yesterday because I noticed some really nice signal strengths. Ran a few speed tests, they were all in the 25-35 Mbps range (8-13 Mbps up, 42-59 ms ping). Still, going to offload to my 20 Mbps Comcast connection while at home. 0:-)
  9. I understood that you referenced Sprint devices that have the *ability* to run on GSM networks, but read this again and pay attention to the information that he’s trying to convey: He started by using AT&T’s *technologies* to show examples of 2G, 3G, and 4G networks. From there, he went on to state that all of AT&T’s LTE phones can use all of AT&T’s technologies. To tie it into something that we as a Sprint forum can relate to, he listed the 3G and 4G technologies that Sprint uses. Regardless of whether or not a Sprint device can run on a GSM network, while that device is connected to the Sprint network, it will only run on EV-DO, WiMAX, and/or LTE. Again, I understand that the devices you listed have the *ability* to connect to GSM networks, but they will not use any GSM technology while connected to Sprint (unless you consider LTE a GSM technology, but that was a whole other thread).
  10. I think that he was referring to Sprint devices that run on Sprint’s network types, those types being EV-DO, WiMAX, and LTE. His first line… He’s saying that “G” stands for “generation,” but that the word “generation” (and by extension, “G”) does not refer to any single type of technology or protocol (e.g., “3G” can encompass EV-DO or HSPA).
  11. I wouldn’t feel too bad; this debate is mainly arguing for people to offload when it’s completely reasonable (e.g., at home or at a trusted friend’s place). Those aren’t great choices for home internet, so nobody is expecting you to offload. Paying for a hotspot service is also perfectly acceptable, but since it’s limited, we’d understand having to shift more usage to your cellular connection. The only time people here would take exception to someone’s usage would be if the person rooted their phone to tether through it and was burning through 10+ GB/month. My dad’s girlfriend got an iPhone around the time AT&T began capping plans. By that point, she had already asked me how to hop on Wi-Fi networks because her service was pretty much unusable. What was bad was that had her son not told her about Wi-Fi, she wouldn’t have even known to ask. The congestion in the area has since died down, so she has usable service again, but she still prefers to use Wi-Fi when available. I totally agree. The effect of smartphones on cellular providers can be likened to Zippo or Bic replacing their customers’ lighters with flamethrowers. Once you have that, why bother with using the furnace at home?
  12. I had a couple of interesting occurrences, but keep in mind this is with my EVO LTE, so these might be related to the connectivity issues. Sitting at a bar last week between 9:30 and 11:00pm, I noticed that my phone was randomly switching between 3G and 4G. Looking at the RSRP/RSRQ values, it would either hover between -111/-8 and -107/-7 or between -93/-20 and -95/-20. While it was on 3G, I tried switching to “LTE only” mode and did the airplane mode dance, but couldn’t get it to connect. I recently began getting LTE at work (X-D), but mostly just around my desk. At my desk, I typically see values around -107/-7 RSRP/RSRQ, respectively. I usually lose the signal as I go deeper into my building. For grins, I switched the phone to “LTE only” mode before going to a room where I always lose signal. The phone held on, hovering at around -120/-20 with speeds of 3.5 to 4.6 Mbps down and 1.2 to 3.9 Mbps up. However, while still in this room, I switched it back to the standard “CDMA + LTE/EvDo auto” mode and couldn’t get LTE to connect again, even after cycling through airplane mode. “LTE only” mode didn’t let me reconnect, either.
  13. It's a known bug. http://s4gru.com/index.php?/topic/2382-password-on-tapatalk/page__hl__tapatalk__fromsearch__1 Workaround here: http://s4gru.com/index.php?/topic/693-sponsors-forum-via-tapatalk/ Basically, just set "The S4GRU Club" as a favorite and access it under your "Favorites" menu.
  14. THIS. I was thinking the exact same thing. As soon as I saw these numbers, I began thinking of ways to incorporate this thread into the offloading debate thread. Sent from my EVO using Tapatalk 2
  15. For the sake of argument, I will meet you half-way for a minute and agree with the bolded section. Based off what you have said before, you will pretty much always use the faster connection, regardless of whether or not you have Wi-Fi available. What can you do on a 20 Mbps LTE connection that you can’t do on a 5-10 Mbps Wi-Fi connection? Take into account that the Wi-Fi connection will likely have a significantly lower latency than the LTE connection. Kind of the same argument twice here; “Why bother using anything else if the cellular network isn’t overloaded?” I think this article that you posted a while back deserves another visit: Here are some excerpts that I like: Up until 2008, 3G was GREAT, then it began to buckle under the stress of all the new subscribers. The honeymoon was over, and I wanted to punch Sprint in the face. The backlash from people who shared my frustration forced the carriers to start putting Wi-Fi on their phones. Great, problem solved, right? Wrong. Now you have people who have gotten in the habit of not using Wi-Fi and either don’t know how to offload or don’t care to put forth the effort of a few flicks of the finger to offload. More people were still jumping on the smartphone bandwagon so the quality of service continued to deteriorate. Now comes along WiMAX and LTE. I’m glad you admit that these are not silver bullets in the fight against network congestion. Even these will get overloaded, just like 3G. Do you like your 3G right now? How about being forced to use it in its current state for the last 3 years (when you didn’t have WiMAX available)? This was a lot, so I’ll sum up my questions: 1.) What can you do on a 20 Mbps LTE connection that you can’t do on a 5-10 Mbps Wi-Fi connection? Take into account that the Wi-Fi connection will likely have a significantly lower latency than the LTE connection. 2.) Given that less than half of the U.S. currently uses smartphones, do you think that we’re not doomed to repeat history? 3.) 
Is it not worth pushing for more diligent Wi-Fi usage to avoid repeating what happened to 3G? I will admit that we might not see sub-150 Kbps speeds again, but with LTE, sub-1 Mbps speeds might become equally excruciating in 7-8 years.
  16. Ok, apologies in advance for this barrage of questions, but I was trying to tie it all together here and needed a couple of things cleared up. Is the “theoretical airlink connection” that you’re referring to the maximum speed per sector? I would like to tie this into imekul’s question: If each tower does have a total of 110 Mbps shared among three sectors, then would the backhaul only need to support a maximum of 110 Mbps? If this is the case, does it invalidate lilotimz’s comment about towers having a capacity of 300 Mbps? Does this mean that each legacy tower had a total backhaul capacity of 4.5 Mbps? If that’s the case, then per AJ’s comment… …is it theoretically possible to use over half of a legacy tower’s capacity from a single sector?
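To keep my own head straight on the arithmetic in those questions, here's a quick sketch. To be clear, the 110 Mbps/tower, 3-sector, and 4.5 Mbps legacy-backhaul figures are the very numbers being questioned in the post, not confirmed specs; the 3.1 Mbps figure is just EV-DO Rev. A's rated peak downlink, used to illustrate AJ's "over half of a legacy tower" point.

```python
# Assumed figures from the thread (unconfirmed): 110 Mbps total airlink
# per tower, split across 3 sectors; legacy backhaul of 4.5 Mbps
# (e.g., three T1 lines at 1.5 Mbps each).
LTE_TOWER_MBPS = 110.0
SECTORS = 3
LEGACY_TOWER_MBPS = 4.5

per_sector_lte = LTE_TOWER_MBPS / SECTORS
print(f"LTE per sector: {per_sector_lte:.1f} Mbps")  # ~36.7 Mbps

# EV-DO Rev. A's rated peak downlink is 3.1 Mbps per sector, which is
# indeed more than half of the hypothetical 4.5 Mbps legacy backhaul:
EVDO_REVA_PEAK_MBPS = 3.1
print(EVDO_REVA_PEAK_MBPS > LEGACY_TOWER_MBPS / 2)  # True
```

If those assumptions hold, then yes, a single sector at peak could theoretically consume more than half of a legacy tower's total backhaul, which is exactly why backhaul was the bottleneck.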
  17. Is this the kind of jitter that you’re referring to? http://en.wikipedia....delay_variation So, if I understand this, if I were to run a latency test while on a Level 3 network to a given server, it would be more likely that I would see a better result than if I ran the test on Sprint’s network to that same server, right? Since Level 3 has 2,703 peers compared to Sprint’s 1,316, is Level 3 likely to have a more direct connection to the test server than Sprint? Is this because the network determined that the path the data came in on was no longer the optimal path to return it on? Could the conditions of the original path have changed in that short amount of time? This looks really interesting! I’m just having a hard time understanding exactly what it’s telling me… What are the address blocks that I’m supposed to plug into it?
  18. Really? You're saying that people with cable connections have the same bandwidth available to them at all points during the course of a day? I think you're taking my 10-person example a little too literally here; obviously no tower is going to be overloaded by 10 people (I think). And then we get to this. The current state of Sprint's unlimited network. The person who wants to stream HD video via his cellular connection can't because of the overloading. I should apologize at this point, as I meant to take my example a step further and say there are two groups: one 10-person group with access to Wi-Fi and a 5-person group forced to use the tower. The people with access to Wi-Fi might be doing something that's less data-intensive, but collectively, it has a noticeable impact on the other group because they're refusing to offload. Let's say that the 10-person group is just doing light browsing, but the 5-person group wants to stream something. The 10-person group consumes equal or more resources than the 5-person group, but each individual might only notice a slight lag when loading a page. This effect is more pronounced in the 5-person group, manifesting itself as vastly increased buffer times. In short: no single raindrop believes that it is responsible for the flood. Why the hell would one not use a 5 Mbps Wi-Fi connection with a 20 ms ping vs. a 25 Mbps LTE connection with a 75 ms ping? Can these phones render pages fast enough to notice a difference in the time it takes to download? I don't think so. Can these phones still react fast enough to take advantage of lower latencies? I think so. The battery conservation is just gravy.
  19. After catching myself up on this thread, I feel comfortable sharing this observation: the vast majority of people here that are extremely well-versed in network implementation/deployment support the “offload when possible” mentality. When I am not the subject matter expert in something, I tend to defer to the experts’ judgment if it is something that I don’t have the time to learn myself. Interestingly enough, once I educate myself on the subject, I almost always end up coming to the same conclusion as the experts (not that “educating myself” necessarily makes me an expert). I want to try summarizing the arguments and examples into points, that way, we can reference them directly and hopefully stop going in circles. I’ll get us started with a couple, and I would encourage you to reference the number to keep it organized. Mainly, I’d like to hear from people that, for whatever reason, dislike or have argued against the idea of offloading. 1.) If 10 people are connected to a cell tower and one person is streaming HD video to their phone, the other 9 will notice it more than if the same 10 people were connected via their respective (wired) broadband connections (even assuming they have the same home ISP and connect through the same local hub/switch). All of the network gurus have pretty much come to a consensus that a wire-backed home ISP infrastructure can handle several high-speed transfers in an area with MUCH less strain than a cell tower under similar loads. This means that if everyone had the same mentality that some on here have towards offloading, there WILL be a noticeable difference in the quality of service. 2.) If your home connection can give you a minimum of 1.5 Mbps (or whatever you need to stream HD video to your phone), even during peak hours, and the latency is equal to or less than that of your cellular network, you will notice no difference unless you are performing a speed test (in which case, who cares?). 3.) 
Given points 1 and 2 are true, there is no disadvantage to using your home’s Wi-Fi connection. In fact, you may benefit from the decreased battery usage since your phone isn’t communicating with a tower that’s potentially miles away. 4.) You pay for Sprint’s service, which you are entitled to use, but you also pay for your home ISP. Whatever data-intensive task you are performing likely requires more than a few keystrokes, so the extra press of the Wi-Fi button on a widget prior to executing that task can’t possibly be terribly inconvenient (I realize the few iPhone people in this thread don’t have this option, but accessing the settings is stupid easy in iOS). But do you want to know why all these points could be invalid? BECAUSE SPEEDTEST!!!
  20. Haha, sorry… didn’t mean to come off so kiss-assy, I do appreciate the explanation, though! I look forward to reading that post!
  21. Due to how TCP functions, latency plays a big role. Latency, jitter (worse on mobile networks), packet loss, etc. all affect what you can get. Just because you are physically located in say Bozeman, Montana doesn't mean the Bozeman server would offer the best performance. Your provider may haul you all the way back to Westin in Seattle before interconnecting with other carriers and thus to the speedtest site. To bring a bit more localness to it, if he is in Arlington Heights, his signal will travel to the MSC (or LTE core) responsible for that site (LTE cores do make it more interesting because they're dynamic). From there it travels over Sprint's network (this part I'm fuzzy on) to wherever Sprint interfaces with the rest of the world. In Chicago, these locations are likely 600 S. Federal and 350 E. Cermak. If a speedtest server sitting on a network that BGP currently prefers is in Joliet, that is where you're likely to get the best performance. It is a lot further away than an Arlington Heights server, but how the Internet is connected is a much bigger factor than physical location. My credentials for this are that I have my own ISP with microwave backhaul and I have equipment in the major Internet exchange points in Chicago. First of all, my post sounded more than a little arrogant, so I apologize for that. I had no doubt that you knew what you were talking about; despite your low post count (at the time of the post), it was pretty obvious that you knew a thing or two about network technology. You may have resurrected a lot of old posts, but at least you made use of the “MultiQuote” feature here and obviously have a genuine desire to contribute and not artificially increase your post count. As for your explanation, thanks! I think I have an idea what you’re saying, but I’m still a little hazy on how the whole thing works. I was under the impression that a latency test was very distance-dependent (hence my “can’t beat physics” comment). 
As an example, here’s how I thought a latency test worked (and this is probably totally wrong): Imagine a very long-distance, rudimentary network is set up between New York City and Los Angeles (straight-shot distance: ~2500 miles). If the network hardware itself somehow didn’t impact the test, the latency would still be a minimum of 27 ms (2500 mi / 186000 mi/s)*2. That number would go up because of the time it takes to change the signal from one form to another (wired -> microwave, for example). As I wrote that, I began to realize that if you don’t take distance into account, you don’t get a true measure of how fast the network can get a response, which is what I think your explanation covers, right? If I’m in Arlington Heights and run a latency test to a server in Joliet, would the packet travel like this? Phone->Tower->MSC->BGP->Server->BGP->MSC->Tower->Phone. If so, is the latency a measurement of the time it takes to go from the BGP to the server and back to the BGP (bolded section)? Just to be clear, is the BGP the Sprint/Internet interface that you referenced?
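The round-trip floor in that example works out like this (a sketch; it assumes propagation at light speed in vacuum, ~186,000 mi/s, while signals in real fiber travel at roughly two-thirds of that, so actual floors run higher):

```python
# Back-of-the-envelope speed-of-light latency floor, NYC to LA.
DISTANCE_MI = 2500        # straight-shot distance from the post
C_MI_PER_S = 186_000      # speed of light in vacuum, miles per second

one_way_s = DISTANCE_MI / C_MI_PER_S
round_trip_ms = one_way_s * 2 * 1000
print(f"{round_trip_ms:.1f} ms")  # ~26.9 ms, matching the ~27 ms above
```

Hardware hops (serialization, conversion between media, router queuing) only add to that floor, which is why routing path matters so much more than straight-line distance.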
  22. So I have a couple of questions that I've been sitting on for a while, but I think that my questions are only valid assuming that I've pieced the following information together right: Based on what I've read on S4GRU, it seems that Sprint has a greater site density than the other carriers. This is, at least in part, due to Sprint running only on 1900MHz for 3G, which does not travel or penetrate as far as AT&T’s 850MHz or Verizon’s 800MHz (both also use 1900MHz for 3G). We know that Sprint has roughly 38,000 towers and T-Mobile has about 35,000. T-Mobile also uses 1900MHz in addition to 1700/2100MHz for 3G. 1.) Do Verizon and AT&T have lower density, but an overall equal or greater number of towers, since they seem to cover rural areas better than Sprint and T-Mobile? 2.) If Sprint has more towers in a given market, then would outfitting 80% of those towers with 800MHz LTE still give them the same number of towers that other carriers have transmitting LTE on 700MHz in that market? I also remember you stating that Sprint’s 800MHz LTE would still have 97% of the distance/penetration characteristics of the other carriers' 700MHz LTE, so we can assume they're about equal in that regard. 3.) Will Verizon and AT&T deploy LTE on all of their frequencies (700 and 1700/2100MHz for both) across all of their towers at some point? Do they have a Network Vision-like strategy?
  23. Ahh, gotcha, thanks! So it’s going to be more like a large, contiguous swath across part of the U.S. missing 800MHz LTE rather than an evenly-distributed 4 out of 5 towers across the whole country. EDIT: Oh, er... not?