
Txmtx

S4GRU Member
  • Posts

    69
  • Joined

  • Last visited

Profile Information

  • Phones/Devices
    iPh 5,1 6.1.2 w/Evasion JB
  • Gender
    Not Telling
  • Location
    Texas
  • Here for...
    Sprint Fan Boy (or Girl)

Txmtx's Achievements

Member Level: iDEN *chirp* (5/12)

18

Reputation

  1. True story. Specifically, I wish it would be unified and enforced the way uTorrent does it: it will display any combination of bps, Bps, K, Ki, M, and Mi that you ask for, but converts and displays them all correctly. Which means that anytime the "K" prefix is used, regardless of setting, it means 1000; and if you really want to display multiples of 1024, use KiB. Pretty sure the public would quickly either get the drift or not notice at all. This is akin to the Y2K bug to me... really shitty, shortsighted planning, a massive failure by people who arguably should know math quite well, because every step up in magnitude increases the error proportionally. Thus, if you're the NSA (and presumably you are familiar with this math, but why are we doing it at all?) and you order up a zettabyte of disk space, you'd have to realize that is actually 867 EiB -- and some things will indeed report that as 1 ZB, some as 867 EiB, and sadly, some as 867 EB. That last one is the confounding bit, since 867 EB should itself mean 752 EiB. How is one to track all the apps and OSes that mess this up? Going the other way, a report of "1 ZB" from a tool that wrongly labels binary multiples as KB, MB, etc. would mean 1.18 ZB in reality! Utter nonsense. Hence my suspicion above that the Speedtest app is probably converting wrong somewhere. But an Ookla dev is more than welcome to clear it up for me.
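The decimal-versus-binary bookkeeping above is easy to verify in a few lines. This is a throwaway sketch (the prefix tables and the `si_to_iec` helper are my own naming, not any real tool's API):

```python
# SI (decimal) vs IEC (binary) prefixes -- illustrative tables, not a library.
SI = {"K": 10**3, "M": 10**6, "G": 10**9, "T": 10**12,
      "P": 10**15, "E": 10**18, "Z": 10**21}
IEC = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40,
       "Pi": 2**50, "Ei": 2**60, "Zi": 2**70}

def si_to_iec(value, si_prefix, iec_prefix):
    """Convert a decimal-prefixed quantity into binary-prefixed units."""
    return value * SI[si_prefix] / IEC[iec_prefix]

print(round(si_to_iec(1, "Z", "Ei")))    # 1 ZB   -> 867 EiB
print(round(si_to_iec(867, "E", "Ei")))  # 867 EB -> 752 EiB
print(round(2**70 / 10**21, 2))          # a mislabeled binary "1 ZB" is really 1.18 ZB
```

Every ratio above is just 1000^n vs 1024^n, which is why the error grows with each step up in magnitude: about 2.4% at K, already 18% at Z.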
  2. Plus, 37.5M could actually be 38,400K or even 39,320K in a lot of applications that don't unambiguously state, or account for, the difference between MiB and KiB versus MB and KB... This is related to the common rage over how a "1TB" HDD shows up as "931GB" on some OSes.
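A quick arithmetic check of those figures (plain illustrative math, not any particular app's conversion code):

```python
MiB, KiB, KB = 2**20, 2**10, 10**3

size = 37.5                   # an app shows "37.5M", here taken to mean 37.5 MiB
print(size * MiB / KiB)       # 38400.0  -> displayed as "38,400K" if K means KiB
print(size * MiB / KB)        # 39321.6  -> roughly "39,320K" if K means 1000 bytes
print(round(10**12 / 2**30))  # 931      -> why a "1TB" drive reads as ~931 "GB"
```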
  3. In my earliest writing I certainly agree with you here. The whole "all things being equal..." part. If it is truly a seam, and you have visible coverage from multiple cells, LTE wins. If the seams are such that for LTE it is an edge while for CDMA it was a seam with choices, maybe not. But the network is being designed with all that in mind. The paper will probably help illuminate some of the features and tricks designed in to help. In any case, we will not see VoLTE supplant 1x voice until they have managed the seam regions properly with LTE in mind, whether that means waiting for HetNet or whatever. Frankly, LTE already far outperforms EVDO for data even in a 1:1 overlay... Any EVDO link at seams/edges has long had its breath sucked away and is unusably saturated at present, despite a seemingly usable RSSI value. This will not change without drastic offloading to LTE. Then perhaps at seams data will revert to 3G for a few hundred square meters, inside buildings of that region. The key to reconciling my statements with yours (beyond the white paper) is basically: for a 1:1 overlay of identical bands and current UE, *if and only if* the cell edge is also a seam (meaning multiple edges are visible to the UE), LTE outperforms CDMA at a seam -- even a "5-bar" CDMA seam. If, however, it is a seam from the UE's perspective on CDMA but effectively an edge on LTE, you will likely run into dead zones for LTE where you wouldn't for CDMA. For relatively unloaded 1x, this means you can place calls where LTE would fade into nothingness. Luckily the network engineers understand all this and will use the full bag of engineering trickery to prevent it... I will be quite surprised if (after perhaps a few weeks of growing pains) VoLTE is not equally or more resilient with respect to dropped/failed calls, with equal or better call quality, and with all the other advantages of LTE. They won't launch it until then.
  4. Haha. I know! That's why I chose avocados as the third thing to go with apples and oranges in our fruity analogy. They're lopsided but more nutritive. Good point. Read the white paper's section on handoff to non-LTE for why. That has been quite troublesome in the interim for rollout. The NV 3G sites are set up for this, but legacy is not capable. (If you've ever had, say, a Skype video call successfully continue after dropping from LTE to 3G, and another time had it not: this is why. It should be possible with minimal failures with all-NV. But until then, it won't always work!)
  5. I realize you were already writing that, so enjoy the white paper I just posted while I go get some coffee. On a fully deployed network built on even Rel 8 and only PCS, VoLTE would work better than the equivalent on ever-faithful 1x, where the UE is responsible for the bulk of the handoff. In LTE Rel 8, during an X2 handoff of a priority-2 bearer (a voice call would be priority 2; signaling is priority 1), the UE doesn't really have to do much of anything. The X2 handoff with multiple candidate eNodeBs is significantly more resilient than a CDMA soft handoff.
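Those priority numbers map onto the standardized QoS Class Identifier (QCI) table from 3GPP TS 23.203 (Release 8). A minimal sketch, to my recollection of the table (the dict layout itself is my own illustration):

```python
# Two rows of the Rel 8 QCI table relevant to VoLTE, as I recall them.
QCI = {
    1: {"resource": "GBR",     "priority": 2, "example": "conversational voice"},
    5: {"resource": "non-GBR", "priority": 1, "example": "IMS signaling"},
}

# Lower number = higher priority: signaling outranks the voice bearer itself,
# which keeps the handoff machinery responsive mid-call.
assert QCI[5]["priority"] < QCI[1]["priority"]
```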
  6. http://www3.alcatel-lucent.com/wps/DocumentStreamerServlet?LMSG_CABINET=Docs_and_Resource_Ctr&LMSG_CONTENT_FILE=White_Papers/CPG0599090904_LTE_Network_Architecture_EN_StraWhitePaper.pdf That should prove invaluable for understanding my statements, and it is a cool read anyway. This is all Rel 8 stuff. Pay particular attention to the sections on S1 and the glorious X2 (lossless handover with multiple candidate eNodeBs, initiated by load and QoS management completely transparent to the UE, sounds pretty "soft" to me...) for why I say cell seams are inherently better on LTE. Indeed, with Rel 11 and CoMP + HetNet (and then Rel 12 to help with rapid X2 handoffs for fast-moving UE), basically everywhere will be a cell seam, and it could be said that it is considerably more desirable to be in moderate range of multiple eNodeBs than to be sitting directly beneath a tower. But even in Rel 8, you bounce between eNodeBs via X2 probably a lot more often than you might realize.
  7. Yes, a large amount of what I am referring to regarding the advantages of LTE at cell seams is based on yet-undeployed Release 11 CoMP. Without CoMP, soft handoffs are not practical, but OFDMA / SC-FDMA as deployed in Rel 8 and 9 already offers significant benefits over CDMA at cell seams, chiefly being impervious to "breathing," which is what dicks over most UE on loaded CDMA cells. Soft handoffs aren't important in LTE, at least until CoMP + HetNet for fast-moving UE.
  8. ^btw: sorry for the text wall, as it appears to me. Tapatalk is garbage. Edited repeatedly, and it still mangles the formatting regardless.
  9. Heh. I hope I phrased it vaguely enough for it to remain accurate under scrutiny. My statements omit a lot of "if and only if" specifics. Of course, the designers are attempting to exploit the strengths wherever possible during the sparse rollout. There are a lot of "just barely" overlapping seams, where a building and/or just holding the handset at the wrong angle can reduce the signal enough that, when the channel fades due to predictable atmospherics, it will often drop entirely down to EVDO or 1x. Indeed :) To elaborate, the primary reason is the phenomenon of "cell breathing," plus the fact that most of us far preferentially use up spectrum with data. 1x and EVDO are both subject to the detrimental effects of CDMA cell breathing, but 1x isn't remotely saturated on the airlink the way EVDO (typically) is. Thus, WYSIWYG regarding dBm for 1x, most times -- your SNR is similar to what the tower sees from you in terms of usable signal. EVDO is/was (thanks to LTE offloading) highly saturated. For any CDMA, saturation causes the perplexing "full bars" (your handset sees the tower clearly) yet no performance (for the tower, you are lost in the noise). It's the CDMA version of an unintentional DDoS. This can be demonstrated with streaming downlink connections set up specifically to require no confirmations (or to consider any confirmation as valid). You'll pull in the data, with the BER you'd expect for your RSSI. But for real-world connections -- browsing, YouTube, whatever -- the protocols require timely-enough responses of "yep, got it, send more" or "got it, but repeat this one section" even for primarily downlink-type activity. These responses frequently fail to be received and are therefore repeated until successful. This is the cause of the all-too-familiar handset-but-really-pocketwarmer which dies by midday despite only attempting to check email once.

This is one reason proxy + web compression tech like legacy Opera Mobile and the like work so well on overloaded CDMA networks -- the advantage goes far beyond just the compression. Packing the whole set of resources into a couple of large chunks helps address that weakness in the airlink. tl;dr, you don't typically experience issues with 1x because it is not saturated. But it must exist; they can't really slice it and give the spectrum to EVDO. Some of this is aided by software in NV -- if you're in an NV-upgraded area and find yourself dropping LTE for 1x, or sitting on 1x for data vice 3G more than before, that is frequently why. It attempts to account for cell breathing and kick you to 1x -- thus you'll see "weird" behavior like -96 dBm 3G, and then look again and notice the "dreaded" circle of 1x. What gives? That was basically full bars for EVDO, right?? In fact, it is very much like a DDoS. The available spectrum has had its "breath sucked out," but it isn't so much that the underlying bandwidth is gone as that the cell simply cannot handle the raw number of UE doing even basic things like checking for push notifications. (It doesn't hurt that Sprint's voice codecs are gentler than, say, Big Red's... but god, I can't wait for "HD Voice" fidelity.) Yeppp. As I described lengthily above. I'll call 'em avocados and oranges. Oranges (1x CDMA) roll farther, but if you're in a place where you're getting avocados (LTE) from two different trees, that'll sustain you a lot better than oranges from two trees. (Invent some analogy here for why you can only eat oranges from one tree at a time while avocados can be devoured as received.) Then there's apples (EVDO CDMA). Fuckin' everyone picks those trees bare.

In between trees is mathematically within rolling distance, but those things are nabbed almost before they hit the ground, so you'd better camp under the tree if you are hungry. This is also why Sprint takes LTE sites live as they are installed, but CDMA NV must get realigned in clusters. Generally, another LTE cell either does nothing for you or helps you. CDMA cells require much more careful optimization of downtilt and height and blah blah blah, and therefore must be calibrated in clusters (and then tweaked for a while too, and again and again as population density and UE density and so on fluctuate...).
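The retransmission death spiral behind the "pocket warmer" can be caricatured with a toy model. This is entirely my own simplification, not a real airlink simulator: assume each uplink "got it, send more" acknowledgment is independently lost with probability p, so a chunk must be sent an expected 1/(1-p) times.

```python
# Toy model of the "full bars but no throughput" failure mode: the downlink
# is fine, but saturated uplink ACKs force repeated retransmissions.

def expected_transmissions(p_ack_lost: float) -> float:
    """Expected sends per chunk when each ACK round fails with probability p."""
    return 1.0 / (1.0 - p_ack_lost)

for p in (0.0, 0.5, 0.9):
    print(f"ACK loss {p:.0%}: {expected_transmissions(p):.1f}x airtime per chunk")
```

At 90% ACK loss each chunk costs roughly 10x the radio time, which is the midday battery death: the handset keeps the radio lit retrying confirmations. It also shows why the Opera-style proxy trick helps beyond compression: bundling resources into fewer, larger chunks means fewer ACK rounds to lose.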
  10. I'm seeing this perception a lot lately, so I want to make a broad, generic clarifying statement about the LTE airlink. You didn't really say anything incorrect, but I am assuming it comes from the same slightly incomplete picture. All other things being equal, LTE is more fragile than CDMA, true enough. But it has been intended from day 1 to limit its exposure to those situations and exploit the ways it is superior. Same number of devices, same frequency, same physical locations of the users and tower, same devices: EVDO outperforms LTE. But LTE's strength is exploiting multiple towers/cells as references, like a rake receiver wired to other rakes. If you're on truly only one tower, EVDO wins. If you're seeing more than one, even if each one individually is pretty weak, a nigh-unusable condition on EVDO becomes 1+ Mb down and .1 up, or better, the more cells you see. So, true cell-edge performance is worse: this is the misinterpreted statement I see spreading in the forum these days. But typically, for LTE, a cell edge is actually a cell seam, and that is a beneficial situation, vice CDMA where it must simply pick one. You get greatly increased performance on LTE cell "edges" compared with CDMA, provided that edge has more than just one reference tower.
  11. Like I said. I only mentioned it cuz I'm feeling verbose and kind enough to explain the potential for "problematic perception." If you really used way too much, Sprint would shitcan your contract. Only 3 blocks, ouch, hope that gets sorted out, that's depressing!
  12. You'll find it's better to be careful not to imply you "abuse" (by the community's subconsciously agreed definition) Sprint's unlimited data. Most of us love the underdog (Sprint) and get a little testy about it. I'm sure you are using it for casual YouTubing and email and browsing. Perfectly within Sprint planners' calculations for usage. There will always be some who use more than average and some who use less... Using exactly the average is just that. Average. I use quite a lot. I can excuse that as being a paid network consultant, but it doesn't negate that I pull down 50GB+ and probably another full GB of Verizon roaming 3G. Anyway, glad to hear 3G is sufficient for your home uses. That is encouraging from a planning perspective. Where are you located?
  13. Indeed. And understandably. Even more, I am curious to see how even the tech-savvy public, unaware of the specifics of the NV software-defined load leveling, react on the occasions where their LTE-capable device drops to 3G despite 3, 4, even 5 bars. (On phones where those bars are truly reflective of LTE strength and not underlying 1x.) More and more phones are hiding the status bar by default, though, which will help. You won't notice unless you specifically check, presumably due to shit performance, which means that NV function didn't predict successfully anyway. The idea is that it should be transparent to the user unless on a streaming unbuffered connection (VoIP). This aspect will only function at its full potential with triband devices and fully equipped towers, but on paper it looks promising. I am optimistic. I love watching the ingenious ways Sprint has dealt with its strategically poor position through engineering. And now they have the smarts and money too...
  14. I can think of only a couple places like this where I know for certain that while data has improved in most of the sector, there are areas of difficulty, highly noticeable on a steady connection like 3G VoIP or regular voice calls. Both of these things together imply to me that they still have work to do on tweaking the cell seams. Unanticipated multipath effects, maybe a higher noise floor than they expected. I'm just an RF engineer, so my experience with actual network deployment is largely theoretical, and other forum guys (AJ...) can probably speak more confidently, but that is my experience and opinion. tl;dr, the 3G problems will subside. The LTE fading and bumping you to 3G or 1x *may* not subside anytime soon if you don't have an 800-LTE-capable phone and Sprint deploys it for seam / cell edge relief. Your 3G trouble indicates a higher likelihood that you are in a seam region. Good news is, when you do pick it up, the orthogonality of LTE means cell *seams* perform much better than EVDO seams, especially on the downlink. (I haven't heard it discussed much other than the generic "LTE is a poor-performing spec at cell edge"... which is true, but true cell edges are very different from converging cell edges, aka seams, where LTE largely outperforms EVDO.)
  15. That is pretty impressive. Where are you? I'm in a pretty loaded area -- upper-middle-class suburbia in Houston, so lots of LTE devices but also lots of kids on EVDO devices -- parents giving them the "discount" iPhone; 4Ses abound. LTE is quick and impressive. NV made a huge difference, as you'd imagine... 3G is usable. Off-peak it is back to what I remember from the rollout of EVDO Rev A back in the day: 1.2M+ down, 400k up, ~120 ms. On-peak is still better than comparable "budget" plans (non-LTE AT&T "aio" & TMo devices). Very usable most times. If it isn't, it's usually because you've ventured into a non-NV-complete area. Such as where I am now. For giggles, I got (after multiple timeouts and errors) "0.00" down and .25 up, 1280 ms. I have turned my wifi back on, of course...