Best laptops for programming | PCWorld

Read More

What’s Next for the Fast-Growing Programming Language? – The New Stack

The programming language Rust has been growing in popularity over the last couple of years. In its latest developer industry report, analyst firm SlashData stated that Rust has “nearly tripled in size in the past 24 months, from just 0.6M developers in Q1 2020 to 2.2M in Q1 2022.”

The Rust Foundation recently announced its Community Grants Program 2022, which has a budget of $625,000. The plan is to give selected Rust maintainers a grant of $12,000 each. In an AMA (Ask Me Anything) video last month, Rust Foundation Executive Director Rebecca Rumbul said that the grants won’t just be for current maintainers but are also meant to encourage new people to join the project. “We want to reward people who are already here and who are already doing good work,” she said, “but we want to ensure that Rust is sustainable and that requires a pipeline of people coming through, being able to learn.”

To find out more about Rust’s growth — and why it is increasingly preferred over traditional programming languages like C and C++ — I conducted an email interview with Rumbul.

TNS: SlashData says that Rust is “the fastest growing language community”. What’s driving this rapid adoption? Is it coming at the expense of older programming languages, like C/C++?

Rust Foundation Executive Director Rebecca Rumbul

Rumbul: I think there are a number of factors in the growth — the language itself is interesting, challenging and satisfying to build in. Its security and memory safety enable people to create with a lot of confidence. The maintainer and contributor community is inclusive and supportive, and Rust is also a great choice for developers looking to enhance their professional prospects, as demand for Rust developers continues to increase.
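A minimal sketch (ours, not from the interview) of what that confidence looks like in practice: where a C++ program might silently read past the end of an array or use a value after it has been freed, Rust surfaces the empty case in the type system and rejects use-after-move at compile time.

```rust
// Illustrative only: Rust makes the "might be empty" case explicit.
// Indexing past the end of a slice panics rather than reading stray
// memory; `.last()` returns an Option so the caller must handle it.
fn last_word<'a>(words: &[&'a str]) -> Option<&'a str> {
    words.last().copied()
}

fn main() {
    let words = vec!["rust", "is", "memory", "safe"];
    assert_eq!(last_word(&words), Some("safe"));
    assert_eq!(last_word(&[]), None);

    // Ownership: after `words` is moved into `moved`, any later use of
    // `words` is a compile error — use-after-free is ruled out before
    // the program ever runs.
    let moved = words;
    assert_eq!(moved.len(), 4);
    println!("ok");
}
```

The point is not that the checks are free, but that whole classes of bugs move from runtime (or from undefined behavior) to the compiler.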

I’m not sure yet that this growth is at the expense of other languages — we find that Rust users are typically people who are already very familiar with languages such as C++.

I was interested in this comment in the SlashData report: “it is mostly used in IoT software projects but also in AR/VR development, most commonly for implementing the low-level core logic of AR/VR applications.” I’ve been fascinated by the rise of 3D web apps (aka metaverse) this year — why is Rust better than other options for the core logic in these kinds of apps?

A complete AR/VR application can be written in multiple languages. For example, you may use C# and Unity to implement the graphics. Rust is a great option for the application’s underlying core logic because of its safety profile (e.g., catching bugs before runtime and memory safety), its availability of libraries (crates), and its ability to create efficient binaries, which may be important for the clients where you want to deploy the application.
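A hypothetical sketch (the names are ours, not from the article) of how such core logic might lean on Rust’s type system: modeling an AR session’s tracking state as an enum means the compiler forces every state to be handled, so adding a new state later makes unhandled code a compile error rather than a runtime surprise.

```rust
// Hypothetical AR session states for illustration.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Session {
    Idle,
    Tracking,
    Lost,
}

// An exhaustive `match`: if a new variant is added to `Session`,
// this function fails to compile until the new case is covered.
fn on_tracking_signal(state: Session, anchors_visible: bool) -> Session {
    match state {
        Session::Idle => Session::Tracking,
        Session::Tracking if !anchors_visible => Session::Lost,
        Session::Tracking => Session::Tracking,
        Session::Lost if anchors_visible => Session::Tracking,
        Session::Lost => Session::Lost,
    }
}

fn main() {
    let s = on_tracking_signal(Session::Idle, true);
    assert_eq!(s, Session::Tracking);
    assert_eq!(on_tracking_signal(s, false), Session::Lost);
    println!("ok");
}
```

This is the "catching bugs before runtime" property in miniature: the state machine’s completeness is checked by the compiler, not by testing.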

In an open source software security plan presented to the White House last week, The Linux Foundation states that “memory safe languages such as Rust, Go, and Java” are increasingly preferred over the likes of C and C++.

Read More

Inside D.C. Police’s Sprawling Network of Surveillance

It was the early days of the Black Lives Matter movement. Protesters gathered in Washington, D.C., in the fall of 2014, awaiting word on whether a grand jury would indict Ferguson, Missouri, police officer Darren Wilson for shooting and killing Michael Brown.

Unbeknownst to the demonstrators, the police were also waiting — and watching. Stowed away in a secure room known as the Joint Operations Command Center, officers and analysts from the D.C. Metropolitan Police Department kept eyes on the news, activists’ social media accounts, and closed-circuit television feeds from across the district, according to internal MPD emails. The police were ready to funnel intelligence to officers on the ground, who were instructed to provide updates on protest activity back to the JOCC every half-hour.

Five months later, the MPD “activated” the JOCC again to monitor demonstrations against the Baltimore police’s killing of Freddie Gray, the emails show. In the lead-up to the protests, MPD analysts scoured social media for demonstration times and locations, as well as any possible indications of violence or civil disobedience, while officers on the ground sent photos of the gatherings. Then when marches started, the officers provided constant updates on where protesters were moving as the JOCC continued to gather intelligence, including on how demonstrators were monitoring the police presence and whether they suspected that there were plainclothes cops among them. (The JOCC had a practice of communicating with undercover officers, including to monitor protests.)

The MPD designed the JOCC as a surveillance control center. It contains more than 20 display monitors linked to around 50 computer stations, all connected to the MPD’s broad arsenal of intelligence data programs and surveillance sources. Launched in a rush on September 11, 2001, it was the MPD’s first “war on terror”-era infrastructure upgrade. Since then, the command center has served as a template for area police’s massively expanded domestic surveillance apparatus.

As a jurisdictional oddity and the site of the country’s most powerful institutions, D.C. contains more law enforcement officers — coming from local, regional, and federal agencies — per capita than any other major city in the U.S. The highly coordinated agencies have together built a complex network of partnerships, initiatives, and technology to surveil the district. The JOCC, for example, is accessible to the FBI, the Department of Homeland Security, and regional police intelligence hubs, in addition to the MPD.

For years, this sprawling web of surveillance has been shrouded in secrecy. Now, more than two decades into the frenzy of police monitoring of ordinary citizens, recently uncovered documents are revealing its scope and practices. (The MPD did not respond to The Intercept’s emailed questions.)

“There’s a real potential for this kind of surveillance to cause a chilling effect and a climate of fear around the right to protest in the city.”

Last year, the transparency collective known as Distributed Denial of Secrets published 250 gigabytes of MPD emails and attachments, stolen as part of a hack by the ransomware group known as Babuk and made

Read More

A programming language for hardware accelerators | MIT News

Moore’s Law needs a hug. The days of stuffing transistors on little silicon computer chips are numbered, and their life rafts — hardware accelerators — come with a price.

When programming an accelerator — a process where an application offloads certain tasks to specialized system hardware in order to speed them up — you have to build entirely new software support. Hardware accelerators can run certain tasks orders of magnitude faster than CPUs, but they cannot be used out of the box. Software must use an accelerator’s instructions efficiently while remaining compatible with the rest of the application. That translates to a lot of engineering work, which then has to be maintained for every new chip you compile code to, in any programming language.

Now, scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) created a new programming language called “Exo” for writing high-performance code on hardware accelerators. Exo helps low-level performance engineers transform very simple programs that specify what they want to compute, into very complex programs that do the same thing as the specification, but much, much faster by using these special accelerator chips. Engineers, for example, can use Exo to turn a simple matrix multiplication into a more complex program, which runs orders of magnitude faster by using these special accelerators.
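To make the matrix-multiplication example concrete, here is an illustrative sketch in Rust (ours, not Exo code) of the kind of rewrite Exo lets a performance engineer apply by hand: a textbook triple loop restructured into cache-friendly tiles, producing the exact same result as the simple specification.

```rust
const N: usize = 64; // matrix dimension (illustrative)
const TILE: usize = 16; // block size chosen to fit in cache

// The "specification": a plain textbook matrix multiply, C = A * B.
fn matmul_naive(a: &[f64], b: &[f64]) -> Vec<f64> {
    let mut c = vec![0.0; N * N];
    for i in 0..N {
        for j in 0..N {
            for k in 0..N {
                c[i * N + j] += a[i * N + k] * b[k * N + j];
            }
        }
    }
    c
}

// The same arithmetic reordered into TILE x TILE blocks so the working
// set stays in cache — the sort of schedule an Exo user would apply,
// with the tool checking that it is equivalent to the version above.
fn matmul_tiled(a: &[f64], b: &[f64]) -> Vec<f64> {
    let mut c = vec![0.0; N * N];
    for ii in (0..N).step_by(TILE) {
        for kk in (0..N).step_by(TILE) {
            for jj in (0..N).step_by(TILE) {
                for i in ii..ii + TILE {
                    for k in kk..kk + TILE {
                        let aik = a[i * N + k];
                        for j in jj..jj + TILE {
                            c[i * N + j] += aik * b[k * N + j];
                        }
                    }
                }
            }
        }
    }
    c
}

fn main() {
    let a: Vec<f64> = (0..N * N).map(|x| (x % 7) as f64).collect();
    let b: Vec<f64> = (0..N * N).map(|x| (x % 5) as f64).collect();
    assert_eq!(matmul_naive(&a, &b), matmul_tiled(&a, &b));
    println!("results match");
}
```

Exo’s contribution is that transformations like this — and far more aggressive ones targeting accelerator instructions — are applied explicitly by the engineer while the system guarantees the optimized program still matches the specification.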

Unlike other programming languages and compilers, Exo is built around a concept called “Exocompilation.” “Traditionally, a lot of research has focused on automating the optimization process for the specific hardware,” says Yuka Ikarashi, a PhD student in electrical engineering and computer science and CSAIL affiliate who is a lead author on a new paper about Exo. “This is great for most programmers, but for performance engineers, the compiler gets in the way as often as it helps. Because the compiler’s optimizations are automatic, there’s no good way to fix it when it does the wrong thing and gives you 45 percent efficiency instead of 90 percent.”

With Exocompilation, the performance engineer is back in the driver’s seat. Responsibility for choosing which optimizations to apply, when, and in what order is externalized from the compiler, back to the performance engineer. This way, they don’t have to waste time fighting the compiler on the one hand, or doing everything manually on the other. At the same time, Exo takes responsibility to ensure that all of these optimizations are correct. As a result, the performance engineer can spend their time improving performance, rather than debugging the complex, optimized code.

“Exo language is a compiler that’s parameterized over the hardware it targets; the same compiler can adapt to many different hardware accelerators,” says Adrian Sampson, assistant professor in the Department of Computer Science at Cornell University. “Instead of writing a bunch of messy C++ code to compile for a new accelerator, Exo gives you an abstract, uniform way to write down the ‘shape’ of the hardware you want to target. Then you can reuse the existing Exo compiler to adapt to that new description instead of writing something

Read More

Biden’s Middle East defense network against Iran is working – Israel News


Read More