AT&T Slashes Intro Rate for DSL Service

Firing what could become the opening salvo in a price war for high-speed Internet access, AT&T has launched an online-only offer that provides high-speed home Web access for US$12.99 per month.

The price is available only to first-time broadband users who are also telephone subscribers. After a year, the price will revert to market rates of about $29 per month.

Converting Dial-Up Users

Still, the price point is seen as significant because it is in the same range as many dial-up access plans. AT&T is using the bargain rate to convince dial-up holdouts to make the switch and likely believes that once they do so, few if any will go back, even if it means a few extra dollars per month.

For $12.99, users get a co-branded AT&T and Yahoo Internet service with speeds of up to 1.5 Mbps. The company, which launched a massive re-branding effort after the recent merger of SBC Communications and the original AT&T, is also offering a 3 Mbps version, known as AT&T Yahoo High Speed Internet Pro, for $17.99 per month.

The prices represent “an incredible value that will help even more consumers experience and enjoy the digital lifestyle,” said Scott Helbing, chief marketing officer for AT&T Consumer.

AT&T, which has about 7 million DSL lines in service, said the promotion was available in the 13 states where SBC conducted business, including southern states such as Texas and Arkansas; Illinois, Indiana and Michigan in the Midwest; and California and Nevada in the West. SBC’s only major inroad on the Eastern seaboard is the portion of Connecticut that it services.

Pricing Power Flexed

Given the competitive nature of the high-speed Internet business, with cable companies and telecommunications firms vying for the same pool of customers, others may follow suit with lower-cost options. The deal could also spell more imminent doom for businesses that rely on dial-up subscribers, whose numbers have been plunging in recent years.

Cable companies typically charge more for high-speed service, though most cable modem access plans have higher download speeds. Still, many of those companies may be forced to react in order to avoid losing ground in the race for more business, as increasing numbers of customers choose bundles of services that include Internet access, telephone service and television services.

SBC can also offer wireless telephone services, since it owns a piece of AT&T Wireless, and portable Internet access, through a nationwide AT&T network of hotspots.

Yet some see danger for AT&T in pricing its service too low and starting a race to the bottom of the market, with quality of service, speed and other features left by the wayside. Also, attempting to compete on price alone leaves a company vulnerable to competitors who are willing to do the same.

Broadband penetration continues to climb, though analysts have said the double-digit growth of recent years will slow as the numbers grow. According to Nielsen//NetRatings, more than 42 percent of Americans now have broadband access at home, with some 60 percent of the Web site visits in the U.S. before the holiday season of 2005 coming from broadband connections.

Providers have helped sustain the growth of broadband by continually holding the line on price, Nielsen//NetRatings Vice President Charles Buchwalter told the E-Commerce Times.

“The speculation was always that the high cost of broadband would limit widespread adoption,” Buchwalter said. “But carriers have responded to the growing demand for lower cost broadband, and all indications are that this trend will continue.”

Building an Audience

That growth will be essential to create a market for the huge menu of high-speed services that providers plan to deliver over high-speed connections, especially on-demand movies and television shows, but also Voice over Internet Protocol (VoIP) and even interactive TV services such as e-commerce.

Telecom analyst Jeff Kagan told the E-Commerce Times that providers are eager to avoid an all-out price war. That’s why many partner with Web service providers such as Yahoo, which has had a long-standing relationship with SBC, AT&T’s predecessor.

“Carriers would much rather differentiate themselves with service and features rather than price,” Kagan said. However, if one company has success with a bargain rate service, others might have to follow. “No one wants to fall behind on building that loyal customer base for all the services that are going to come over that line in the future.”


OSS News: Enterprise Linux, Microsoft Replacements, Fuzzy Linux Solutions

Open-source innovations and the Linux operating system continue to set complacency aside.

The Linux Foundation is ever-expanding its influence as it brings more developer projects under its banner, leaving no room for proprietary software to rule the computing world.

LF is the world’s leading home for collaboration on open-source software, open standards, open data, and open hardware. Learn about the importance of NextArch and OpenBytes, two of LF’s newest creations.

Red Hat pushed its enterprise Linux forward in recent weeks with two updates to further innovate enterprise computing. What difference can numbers make? Check out the significance of Red Hat’s recent release of RHEL 8.5 versus the just-released RHEL 9 Beta.

Also check out in this column why new fuzzing developments are good for your cybersecurity. Tired of going online to access the Microsoft Office suite on your Linux computer? Learn about two new alternative office suites you can run with full compatibility from your Linux hard drive.

LibreOffice Update Closes Out the Series

The Document Foundation on Nov. 4 announced the release and general availability of LibreOffice 7.1.7 as the last point release in the LibreOffice 7.1 office suite series.

The LibreOffice 7.1 office suite was released in February and is supported until the end of November, when the series reaches end of life and no new maintenance updates will be published.

LibreOffice 7.1.7 is a minor update to address 27 bugs across the office suite’s various core components. You can see details about them in the RC1 and RC2 changelogs.

Running the 7.1 series beyond that point leaves your installation outdated and potentially vulnerable, since no new maintenance releases will be issued. It is being replaced by LibreOffice 7.2, which is supported until June 12, 2022, and can be downloaded from The Document Foundation’s website or picked up as it reaches the various Linux distribution repositories.

LibreOffice 7.2 brings many new features and improvements, as well as better support for proprietary formats created with the MS Office suite. The latest point release is LibreOffice 7.2.2, but version 7.2.3 is expected to arrive by the end of the month.

New SoftMaker FreeOffice 2021 Now Available

If you are not a fan of the LibreOffice suite for Linux, SoftMaker’s FreeOffice suite for Linux may be more to your liking. FreeOffice is a free full-fledged alternative to Microsoft Office. Its latest completely revised version became available last month.

FreeOffice is seamlessly compatible with Microsoft Office, with support for modern and classic Microsoft formats. It comes with support for the SVG graphic format, new functions, and improved import and export capabilities. As an added benefit for Linux users who also run Windows or macOS computers, FreeOffice 2021 now offers the option to use a single license simultaneously on Windows and macOS.

The suite includes the word processing software TextMaker, the spreadsheet software PlanMaker, and the presentation software called Presentations. All three programs contain numerous innovations and improvements that make work even more efficient than previous releases. FreeOffice 2021 can be used with either modern ribbons or classic menus and toolbars.

The FreeOffice suite is the free offshoot of the commercial package SoftMaker Office. FreeOffice 2021 can be downloaded free of charge at freeoffice.com.

Linux Foundation Tackles Diverse Computing Environments

The Linux Foundation announced earlier this month at its Membership Summit the creation of the NextArch Foundation. The new Foundation is a neutral home for open-source developers and contributors to build a next-generation architecture that can support compatibility between an increasing array of microservices.

Cloud-native computing, artificial intelligence (AI), the internet of things (IoT), edge computing, and much more have led businesses down a path of massive opportunity and transformation. But a lack of intelligent, centralized architecture prevents enterprises and developers from fully realizing their promise.

The NextArch Foundation will address that glaring gap. Developers today face seemingly impossible decisions among different technical infrastructures and the proper tool for a variety of problems, said Jim Zemlin, executive director of the Linux Foundation.

“Every tool brings learning costs and complexities that developers do not have the time to navigate. Yet there is the expectation that they keep up with accelerated development and innovation,” he explained. “NextArch Foundation will improve ease of use and reduce the cost for developers to drive the evolution of next-generation technology architectures.”

NextArch will leverage infrastructure through architecture and design to automate development, operations, and project processes to increase the autonomy of development teams. Enterprises will gain easy-to-use and cost-effective tools to solve the problems of productization and commercialization in their digital transformation journey.

“This is an important effort with a big mission, and it can only be done in the open-source community. We are happy to support this community and help build open governance practices that benefit developers throughout its ecosystem,” said Mike Dolan, senior vice president and general manager of projects at Linux Foundation.

Project OpenBytes Makes Open Data More Accessible

The Linux Foundation earlier this month announced the new OpenBytes project, spearheaded by Graviti, to make open data more available and accessible through the creation of data standards and formats.

Scores of AI projects have been held up for a long time by a general lack of high-quality data from real use cases, according to Edward Cui, Graviti’s founder. His company wants to change that situation.

“Acquiring higher quality data is paramount if AI development is to progress. To accomplish that, an open data community built on collaboration and innovation is urgently needed. Graviti believes it is our social responsibility to play our part,” he said.

A standard format for data published, shared, and exchanged on the open platform will help data contributors and consumers easily find the relevant data they need and make collaboration easier, Cui explained.

Large tech companies already realize the potential of open data. But no well-established open data community yet exists with neutral and transparent governance spanning the various collaborating organizations, according to LF.

“The future of software is being eaten by open source, as well as data-sharing. OpenBytes’ announcement is a great signal for all developers on the accessibility of datasets. We are very excited to see standardized datasets available to a broader community, which will massively benefit AI engineers,” said Bing He, co-founder and COO at Jina AI.

Better Cyber Fuzzing for Software Devs

Continuous fuzzing over the years has become an essential part of the software development lifecycle. By feeding unexpected or random data into a program, fuzzing catches bugs that would otherwise slip through the most thorough manual checks and provides coverage that would take staggering human effort to replicate.
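Conceptually, a fuzzer is just a loop that generates semi-random inputs, feeds them to the code under test, and records any failure. The sketch below is purely illustrative; the deliberately buggy `parse_length_prefixed` parser and the `fuzz` driver are hypothetical names invented for this example, not part of any tool mentioned here.

```python
import random

def parse_length_prefixed(data: bytes) -> bytes:
    """Toy parser: the first byte declares a length, followed by that many
    payload bytes. Deliberately fragile, to give the fuzzer something to find."""
    if not data:
        raise ValueError("empty input")
    length = data[0]
    payload = data[1:1 + length]
    if len(payload) != length:
        # Truncated input: exactly the kind of edge case fuzzing surfaces
        # and a manual test suite tends to miss.
        raise IndexError("truncated payload")
    return payload

def fuzz(iterations: int = 10_000, seed: int = 0) -> int:
    """Feed random byte strings to the parser; count the failures found."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            parse_length_prefixed(data)
        except (ValueError, IndexError):
            failures += 1
    return failures
```

Real fuzzers such as those behind OSS-Fuzz add coverage guidance and input mutation on top of this basic loop, which is what lets them explore deep program states rather than relying on blind randomness.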

The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) recently issued new guidelines for software verification in response to the White House Executive Order on Improving the Nation’s Cybersecurity. That action specifies fuzzing among the minimum standard requirements for code verification.

Google on Nov. 11 announced the release of ClusterFuzzLite, an open-source continuous fuzzing solution that runs as part of continuous integration (CI)/continuous deployment (CD) workflows to find vulnerabilities faster than ever before.

With just a few lines of code, GitHub users can integrate ClusterFuzzLite into their workflow and fuzz pull requests to catch bugs before they are committed, enhancing the overall security of the software supply chain.
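For illustration, a pull-request workflow following the pattern ClusterFuzzLite documents for GitHub Actions might look like the fragment below; the file name, time budget and other parameter values here are assumptions and should be checked against the project’s own documentation.

```yaml
# .github/workflows/cflite_pr.yml  (hypothetical file name)
name: ClusterFuzzLite PR fuzzing
on:
  pull_request:
jobs:
  pr-fuzz:
    runs-on: ubuntu-latest
    steps:
      - name: Build fuzzers
        id: build
        uses: google/clusterfuzzlite/actions/build_fuzzers@v1
        with:
          language: c++          # language of the project under test
          sanitizer: address
      - name: Run fuzzers
        uses: google/clusterfuzzlite/actions/run_fuzzers@v1
        with:
          fuzz-seconds: 600      # fuzzing budget per pull request
          mode: code-change      # fuzz only the code the PR changes
```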

Since OSS-Fuzz’s launch in 2016, over 500 critical open-source projects have been integrated into the Google program, resulting in over 6,500 vulnerabilities and 21,000 functional bugs being fixed. ClusterFuzzLite goes hand-in-hand with OSS-Fuzz by catching regression bugs much earlier in the development process, according to Google.

Fuzzing is an extremely effective way to catch bugs that would otherwise be overlooked, but it is slow and not always integrated into development workflows, according to Jonathan Metzman, a software engineer on the Google Open Source Security Team. ClusterFuzzLite offers a continuous solution that is an integrated part of CI/CD workflows, making finding bugs much easier and faster.

“ClusterFuzzLite complements ClusterFuzz by being easy to set up and work with both open source and closed source projects. For example, a GitHub user can use ClusterFuzzLite on GitHub actions with just one simple configuration file,” he told LinuxInsider.

“It also fuzzes pull requests and commits, allowing it to catch bugs before they land. ClusterFuzz does not offer this feature, and therefore ClusterFuzzLite is complementary for users who are already using ClusterFuzz,” he said.

RHEL 8.5 Release Addresses Deployment Complexity

Red Hat on Nov. 10 announced the general availability of Red Hat Enterprise Linux 8.5. RHEL is an enterprise Linux platform offering a common open operating system that extends across clouds, traditional data center operations, and out to the edge.

Version 8.5 provides new capabilities to meet evolving and complex IT needs, from enhanced cloud-native container innovations to extending Linux skills with system roles, on whatever footprint customers require, according to Red Hat.

RHEL 8.5 is designed to offer a refined platform for the hybrid world. It addresses the needs of traditional data centers as well as complex multi-cloud and edge computing deployments to enhance digital transformation.

Red Hat Enterprise Linux 8.5 is now generally available via the Red Hat Customer Portal.

RHEL 8.5 is designed to serve as a backbone for public cloud providers, multiple hardware architectures, virtualized environments, and edge computing models, according to Red Hat. That can help organizations that find an exclusively public-cloud approach economically unfeasible at long-term scale.

Wait, RHEL Has a New Beta

If you are not running mission-critical workloads and want something more cutting-edge, you can skip the latest stable RHEL 8.5 release in favor of the RHEL 9 Beta, released on Nov. 3.

Red Hat Enterprise Linux (RHEL) 9 Beta is now available with new features and many more improvements. The newest beta release is based on upstream kernel version 5.14 and provides a preview of the next major update of RHEL. Like version 8.5, this release is designed for demanding hybrid multi-cloud deployments that range from physical, on-premises, public cloud to edge.

Unlike previous major releases of RHEL, the version 9 Beta has fewer changes for admins and IT ops to figure out. Red Hat’s beta announcement gives the full picture.

Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open-source technologies. He is an esteemed reviewer of Linux distros and other open-source software. In addition, Jack extensively covers business technology and privacy issues, as well as developments in e-commerce and consumer electronics.


‘Shadow Code’ Creates Risk for 99% of Websites

Shadow code — third-party scripts and libraries often added to web applications without security validation — poses risks to websites and jeopardizes compliance with privacy regulations, according to new research released Tuesday.

Third-party code leaves organizations vulnerable to digital skimming and Magecart attacks, the researchers also noted.

The study, conducted by Osterman Research for PerimeterX, found that more than 50 percent of the security professionals and developers surveyed believed there was some or a lot of risk in using third-party code in their applications.

The researchers also found increased concern among respondents about cyberattacks on their websites. Last year, 45 percent of those surveyed were significantly concerned about their internet outposts being targeted by hackers; this year that number jumped to 61 percent.

Concern over supply chain attacks also increased, from 28 percent in 2020 to 50 percent in 2021. Anxiety over Magecart attacks jumped significantly from last year, too, by 47 percent. Magecart, or electronic skimming, is a form of fraud where transaction data is intercepted during the checkout of an online store.

Balancing Risk and Efficiency

Developers use third-party code for a number of reasons.

“It’s readily available,” said Brian Uffelman, vice president of product marketing at PerimeterX, a web security service provider in San Mateo, Calif.

“There’s an incorrect assumption that if it’s out there and open source, it’s secure,” he told TechNewsWorld.

“They’re trusting that the open source code that they’re using, or the libraries that they’re using, are secure,” he continued. “What we found is that is not the case.”

“Oftentimes, they’re trying to balance efficiency with risk,” he added.

Jonathan Tanner, a senior security researcher at Barracuda Networks, a security and storage solutions provider based in Campbell, Calif., explained that libraries play an important role in developing applications, since they provide functionality that would take a lot of time to develop, and in many cases would be more prone to potential bugs and exploits if developed internally.

“There’s a common adage of not reinventing the wheel when it comes to development, which not only saves development time but also allows for a higher level of complexity in the applications as a result,” he told TechNewsWorld.

Courting Trouble

Tanner added that in some cases third-party libraries can even be more secure than code written by internal development teams, even if vulnerabilities are discovered in the most reputable ones.

“If even the most reputable library potentially maintained by hundreds of experts in the specifics of what the library does can have vulnerabilities, trying to build and maintain the same functionality internally with a small team of developers who likely are not experts on the functionality could potentially be disastrous,” he observed.

“There is certainly a lot of value in utilizing pre-existing libraries as a result, not only from a time-saving perspective but also from a security perspective,” he said.

Development teams want to get products out the door as quickly as possible, observed Sandy Carielli, a principal analyst with Forrester Research.

“A lot of third-party and open-source components will allow them to add basic functionality and focus on some of the more sophisticated differentiating aspects of the product,” she told TechNewsWorld.

“The challenge is that if you don’t know what those third-party components are that are called in, you can find yourself in a heap of trouble,” she said.

“If modern businesses want features and functionality delivered fast and cheap, it’s inevitably going to come at the cost of not being able to do something — or a lot of things — the right way,” added Caitlin Johanson, director of the Application Security Center of Excellence at Coalfire, a provider of cybersecurity advisory services in Westminster, Colo.

“We would be naive to think that the speed at which new apps and features get delivered to our technology-reliant world is achieved without corners getting cut,” she told TechNewsWorld.

Risky Business

There are countless risks that shadow code can pose to organizations, maintained Taylor Gulley, a senior application security consultant with nVisium, a Falls Church, Va.-based application security provider.

“One being the potential for a full compromise of the application and the data within that application,” he told TechNewsWorld.

“In addition to technical risks,” he continued, “the reputational risks could be catastrophic if a vulnerability is introduced to your application as a result of an unvetted, third-party library.”

When an organization lacks visibility into the open-source code it’s using, licensing risks can also emerge.

“An open-source component might have a restrictive license,” Forrester’s Carielli explained.

“Suddenly, you’ve added a component to your code that requires you to open-source the entire application,” she continued. “Now your organization is at risk because all your proprietary code has to be open sourced.”

Widely Used

The Osterman researchers also found that the use of third-party code is widespread throughout the internet. Nearly all the respondents to their survey (99 percent) reported their websites used at least one third-party script.

Even more revealing was the finding that 80 percent of those surveyed said third-party scripts made up 50 to 70 percent of their websites.

“While there haven’t been many formal studies on the prevalence of shadow code, we can assume that it is highly prevalent due to the widespread use of JavaScript in most websites, and the sheer number of JavaScript libraries available,” observed Kevin Dunne, president of Pathlock, a unified access orchestration provider in Flemington, N.J.

“There are over a million known JavaScript open source projects on GitHub, which presents an insurmountable challenge for security teams to review and assess manually,” he told TechNewsWorld.

He added that if shadow code allows a third party to view data on an organization’s site without the organization’s knowledge, it likely puts the organization at risk of falling out of GDPR or CCPA compliance, because an unknown data processor is viewing data without a public disclosure.

“This can result in millions of dollars of potential fines for an organization that is required to maintain this type of data privacy compliance,” he explained.
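The exposure Dunne describes is one reason many sites pin third-party scripts with Subresource Integrity (SRI), a standard browser mechanism not discussed in the study. The sketch below, with a made-up script and CDN URL, shows how an SRI value is computed and used.

```python
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    """Compute a Subresource Integrity (SRI) value for a script.

    Serving a third-party script tag with this integrity attribute makes the
    browser refuse to execute the script if its content ever changes, e.g.
    after a Magecart-style compromise of the third-party host."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# Hypothetical script content and CDN URL, for illustration only.
script = b"console.log('hello');"
tag = (f'<script src="https://cdn.example.com/lib.js" '
       f'integrity="{sri_hash(script)}" crossorigin="anonymous"></script>')
```

The trade-off is operational: the hash must be recomputed every time the third party legitimately updates the script, which is why SRI works best for version-pinned library URLs.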

Shadow code is definitely an increasing problem, and one that a lot of people don’t realize they have, added Christian Simko, director of product marketing at GrammaTech, a provider of application security testing solutions headquartered in Bethesda, Md.

“Custom code is shrinking and third-party code usage is growing,” he told TechNewsWorld. “If you’re not properly managing the code base that you’re using, you could be inserting vulnerabilities into your software without knowing it.”

John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News.
