#computerscience

waynerad@diasp.org

"Neurallambda".

"The Problem: My premise all comes down to 'reasoning', and the lack thereof, in current AI models. I'll provide my working definition of 'reasoning', but for a moment, please bear with a couple examples of reasoning failures."

"Transformer models cannot reason."

"Diffusion models cannot reason."

"AI is currently like a living textbook."

"What is Reasoning? Reasoning is the ability to know true things without having learned them. It is building knowledge/predictions/retrodictions/actions atop principles, instead of evidence."

"What are programs?" "Turing Machines are machines capable of executing programs which can calculate anything calculatable. Your computer is a Turing machine (in the limit of infinite memory and time). I'd also suggest that your conscious mind is Turing Complete."

"Tiers of 'Programming Ability' / 'Reasoning Ability'":

"1. An AI can execute programs"

"2. An AI can verify traits of a program"

"3. An AI can generate novel programs during training"

"4. An AI can generate novel programs post training"

"So far, this library provides an existence proof up to Level 1. It contains code which can execute arbitrary programs, written in a custom lisp dialect, in a fully differentiable setting. (Some tantalizing tests have proven up to Level 3, that AI can learn novel programs to solve toy problems via SGD, but, there are still frontiers of research here)."

By "this library", he is talking about his creation "Neurallambda". What is Neurallambda? It's a dialect of lisp designed to be generated by AI systems but at the same time be human-readable. It also has the important attribute that all the code generated using it is "differentiable". That means the code itself can be incorporated into a stochastic gradient descent model. That's what the "SGD" stands for above. In this form, Neurallambda code can be deterministically translated into tensors, compiled or interpreted, and then the resulting tensors be read back out, and presented in human-readable form.

What do y'all think? Is this a path to computer reasoning?

Neurallambda

#solidstatelife #ai #codellms #computerscience

waynerad@diasp.org

"PostgreSQL and Databricks founders join forces for DBOS to create a new type of operating system"

Funny, I was just watching a video earlier today about how Postgres has expanded from a database system to a complete backend stack.

But here, they're talking about something different.

"Today DBOS announced that it has raised $8.5 million in seed funding as well as the launch of its first product, DBOS Cloud, which provides a new type of cloud-native operating system for cloud application deployment."

"Today, a database is a type of application that runs on top of an operating system, which in the cloud is often Linux. DBOS takes a radically different approach to operating systems by running the operating system on top of a high-performance database."

"Operating system services, such as messages, scheduling and file operations, those are all written in SQL on top of a very high-performance OLTP DBMS [Online Transaction Processing Database Management System]."

"Taking aim at Linux and the Kubernetes container orchestration system and the etcd key value store, they say."

This isn't the first time I've heard of someone saying a database should be at the heart of an operating system. But it's the first time I've heard of anyone making a serious attempt to do it.

PostgreSQL and Databricks founders join forces for DBOS to create a new type of operating system - VentureBeat

#solidstatelife #computerscience #operatingsystems #databases

waynerad@diasp.org

"Mojo vs Rust: is Mojo faster than Rust?"

"Rust was started in 2006 and Swift was started in 2010, and both are primarily built on top of LLVM IR. Mojo started in 2022 and builds on MLIR (Multi-Level Intermediate Representation), which is a more modern 'next generation' compiler stack than the LLVM IR approach that Rust uses. There is a history here: our CEO Chris Lattner started LLVM in college in Dec 2000 and learned a lot from its evolution and development over the years. He then led the development of MLIR at Google to support their TPU and other AI accelerator projects, taking that learning from LLVM IR to build the next step forward: described in this talk from 2019."

"Mojo is the first programming language to take advantage of all the advances in MLIR, both to produce more optimized CPU code generation, but also to support GPUs and other accelerators, and to also have much faster compile times than Rust. This is an advantage that no other language currently provides, and it's why a lot of AI and compiler nerds are excited about Mojo. They can build their fancy abstractions for exotic hardware, while us mere mortals can take advantage of them with Pythonic syntax."

The article goes on to describe Mojo's native support for SIMD, which stands for "Single Instruction, Multiple Data" and refers to special instructions that have been part of CPUs for a long time but are hard to use.

Mojo frees memory on the last use of an object, instead of waiting for when an object goes out of scope, a subtle difference that makes a big difference in the field of AI, "where freeing an object early can mean deallocating a GPU tensor earlier, therefore fitting a larger model in GPU RAM." It's also advantageous in a type of optimization called tail call optimization that applies to recursive functions.
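For comparison: Rust normally frees a value only when it goes out of scope, though you can call drop() explicitly to release it at its last use, which is in effect what Mojo's compiler does for you automatically. A minimal Rust sketch (the big Vec here is just my stand-in for a GPU tensor):

```rust
fn main() {
    // Stand-in for a large GPU tensor: a 256 MB heap allocation.
    let big_tensor = vec![0u8; 256 * 1024 * 1024];
    let checksum: u64 = big_tensor.iter().map(|&b| b as u64).sum();

    // The last use of big_tensor was above, but in Rust it would
    // normally live until the end of main. Dropping it explicitly
    // frees the memory now, which is what Mojo's last-use destruction
    // does automatically.
    drop(big_tensor);

    // Plenty of work can happen here without that 256 MB held hostage.
    println!("checksum = {checksum}");
}
```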

Mojo vs Rust: is Mojo faster than Rust?

#solidstatelife #ai #computerscience #programminglanguages #python #mojo #rust

waynerad@diasp.org

"Flying Carpet: Send and receive files between Android, iOS, Linux, macOS, and Windows over ad hoc WiFi. No shared network or cell connection required, just two devices with WiFi chips in close range."

"Don't have a flash drive? Don't have access to a wireless network? Need to move a file larger than 2GB between different filesystems but don't want to set up a network share? Try it out!"

Interestingly, if you scroll down, you'll find this app was ported from Go to Rust, because of problems with the Go version. Turns out this wasn't a good use case for Go.

"There were several issues I didn't know how to solve in the Go/Qt paradigm, especially with Windows: not being able to make a single-file executable, needing to Run as Administrator, and having to write the WiFi Direct DLL to a temp folder and link to it at runtime because Go doesn't work with MSVC. Plus it was fun to use tokio/async and windows-rs, with which the Windows networking portions are written. The GUI framework is now Tauri which gives a native experience on all platforms with a very small footprint. The Android version is written in Kotlin and the iOS version in Swift."

spieglt / FlyingCarpet

#solidstatelife #computerscience

waynerad@diasp.org

Large language models and the end of programming.

How much does it cost to replace one human developer with AI? Matt Welsh (co-founder of Fixie.ai and former Harvard computer science professor; the talk was given at Harvard) did the math.

So let's say that a typical software engineer salary in Silicon Valley or Seattle is around $220,000 a year. That's just the base salary; it doesn't include blah blah blah. He figures it all adds up to $1,200/day. That sounds a bit high to me, but whatever -- it's the order of magnitude that matters here. He next figures the cost for a GPT model to produce the same output is $0.12. Twelve cents.

That's a ratio of about 10,000 to 1.
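The arithmetic, using his figures (the $1,200/day and $0.12 numbers are his; the working-days assumption in the comment is mine):

```rust
fn main() {
    // Welsh's figures: ~$1,200/day fully loaded for a human developer
    // (roughly consistent with a $220k base salary plus overhead if you
    // assume about 250 working days a year), versus ~$0.12 for a GPT
    // model to produce comparable output.
    let human_cost_per_day = 1200.0_f64;
    let model_cost_per_day = 0.12_f64;
    println!("ratio: {:.0} to 1", human_cost_per_day / model_cost_per_day); // 10000 to 1
}
```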

This suggests "a very large shift in our industry".

If I can compress and paraphrase the "very large shift in our industry" that he predicts is coming, it would be this: What we today call "prompt engineering" will become "engineering", and what we today call "engineering" will be done by the machines.

Humans will continue to play the role of "product manager" -- deciding what software needs to be written based on what customers want. AI will write the software. AI will maintain the software on an ongoing basis. Human programmers will become proofreaders of AI-generated software. But eventually people will stop caring about whether the software is readable or maintainable by humans. As AI models improve, the software will achieve the same reliability as software written by humans, and people will gain confidence in it. AI will also write the tests. AI will go from low-level abstractions to high-level abstractions. AI will be able to write code to do tasks humans can't write the code for, like "transform this text to make it kid-safe."

Large language models and the end of programming - CS50

#solidstatelife #ai #genai #llms #economics #computerscience

analysisparalysis@pod.beautifulmathuncensored.de

Petals dropped. It says you can now use large models on a single GPU.
Reminds me of the “more expensive setup is better” argument that makes people buy new graphics cards and CPUs.

But what we need to define is application, not numbers.

Just like “With iPod, Apple has invented a whole new category of digital music player that lets you put your entire music collection in your pocket and listen to it wherever you go” and not “we invented a X GB mp3 player with Y KB cache”.

Application focus, not specs focus.

My goal is to have a model that gives accurate answers to questions about a document and that has the “decency” to admit that the answer cannot be found in those documents.

There are several options out there with local llms, yet none of them can be configured.

If a reply is bad, all you can do is choose another model. People hope that bigger is better, so they try to stuff huge and ever bigger (Petals) models into their computers, but what do those models really do?

They contain VAST corpuses on all kinds of topics. I assume you won’t need 99% of those billions of parameters in your entire lifetime.

THIS is where you need to start: limit models by application. If you only search in English, don’t get a model that also contains Urdu.

If you only talk about computer science, don’t get that model that contains psychology.

Now the problem is that there are no models available that are specific - and good at what they do there.

To summarize, we need models that are specializable: define the requirements, create a new model based on them, and get something much faster and smaller that gives you the right results.

EDIT: Another issue is what if the answers don’t satisfy you, would a human give better answers? Is it the model, what is it? You only have a few parameters like temperature and that’s it. Besides, a huge issue is model entitlement, the dreaded “This is not morally okay, and I will not”
Shut up, you are living on MY computer, I tell you what you do and don’t do.

Yet I feel like I’m tilting at windmills: nobody debates this anywhere, and people just report on the great new model with 10 times more parameters as if that saved humanity. #AI #MachineLearning #Models #ApplicationFocus #SpecsFocus #ComputerScience #English #Urdu #SpecializableModels #Accuracy #DocumentAnalysis #GPU #Hardware #Technology

waynerad@diasp.org

"Engineering leaders have long sought to improve the productivity of their developers, but knowing how to measure or even define developer productivity has remained elusive. Past approaches, such as measuring the output of developers or the time it takes them to complete tasks, have failed to account for the complex and diverse activities that developers perform. Thus, the question remains: What should leaders measure and focus on to improve developer productivity?"

"Today, many organizations aiming to improve developer productivity are finding that a new developer-centric approach focused on developer experience (also known as DevEx) unlocks valuable insights and opportunities."

So, subjective experience?

"Developer experience encompasses how developers feel about, think about, and value their work. In prior research, we identified more than 25 sociotechnical factors that affect DevEx. For example, interruptions, unrealistic deadlines, and friction in development tools negatively affect DevEx, while having clear tasks, well-organized code, and pain-free releases improve it."

"A common misconception is that DevEx is primarily affected by tools. Our research, however, shows that human factors such as having clear goals for projects and feeling psychologically safe on a team have a substantial impact on developers' performance."

They go on to say the "three dimensions of DevEx" are feedback loops, cognitive load, and flow state.

DevEx: What actually drives productivity

#solidstatelife #computerscience #developers

waynerad@diasp.org

"Hero C Compiler is a C compiler that allows you to compile your C codebase (with limitations) to SPIR-V for the Vulkan graphics API. This means you can share struct's, enum's and functions between your CPU & GPU code. HCC targets the future of GPU programming so is designed around features such as bindless resources and scalar alignment. This makes it easier to interop with the GPU and focus on writing shader code without writing your own shader build system."

I thought this was a pretty interesting idea: a C compiler specifically designed to share data between CPUs and GPUs. I wouldn't've thought the C compiler would be the place to address this, but maybe it is.

Vulkan is a cross-platform GPU API used heavily for video games, designed as the successor to OpenGL and as an alternative to Direct3D (on Windows) and Metal (on Apple devices). It's developed by the Khronos Group, the industry consortium behind OpenGL, and grew out of AMD's Mantle API. The SPIR-V mentioned is the intermediate representation Vulkan uses for shader code.

Hero C Compiler

#solidstatelife #computerscience #programminglanguages #gpus #vulkan

christophs@diaspora.glasswings.com

Israeli computer pioneer passes away just weeks after famed research partner

Prof. Jacob Ziv, one of the prominent Israeli researchers in computer science and former president of the Israel Academy of Sciences and Humanities, passed away Sunday aged 91.
Prof. Ziv, together with Prof. Abraham Lempel, who passed away in February at the age of 86, developed the Lempel-Ziv (LZ) data compression algorithm, which paved the way for the development of formats such as ZIP, PDF, and MP3.

#RIP #computerscience

https://www.ynetnews.com/business/article/bj2k2g0x3

waynerad@diasp.org

C-rusted is a new system for applying the safety features of Rust to the venerable C language. The developers are following in the footsteps of TypeScript. They say:

"C-rusted is a pragmatic and cost effective solution to up the game of C programming to unprecedented integrity guarantees without giving up anything that the C ecosystem offers today. That is, keep using C, exactly as before, using the same compilers and the same tools, the same personnel... but incrementally adding to the program the information required to demonstrate correctness, using a system of annotations that is not based on mathematical logic and can be taught to programmers in a week of training."

"Only when the addition of annotations shows the presence of a problem will a code modification be required in order to fix the latent bug that is now visible: in all other cases, the code behavior will remain exactly the same. This technique is not new: it is called gradual typing, and consists in the addition of information that does not alter the behavior of the code, yet it is instrumental in the verification of its correctness. Gradual typing has been applied with spectacular success in the past: Typescript has been created 10 years ago, and in the last 6 years its diffusion in the community of JavaScript developers has increased from 21% to 69%. And it will continue to increase: simply put, there is no reason to write more code in the significantly less secure and verifiable JavaScript language."

They celebrate the greatness of C, citing such things as: C compilers exist for almost any processor; compiled C code is very efficient, without hidden costs; C is defined by an ISO standard; C, possibly with extensions, allows easy access to hardware; C has a long history of usage, including in critical systems; and C is widely supported by all sorts of tools. They also cite disadvantages. The reason C code can be compiled to efficient machine code for almost any architecture is that, whenever possible and convenient, high-level constructs are mapped directly to a few machine instructions; but because instruction sets differ from one architecture to another, the behavior of C programs is not fully defined, and that is a problem. And of course, memory references in C are raw pointers that carry no information about the associated memory block or its intended use, and there are no run-time checks to ensure the safety of pointer arithmetic, memory accesses, and memory deallocation, leading to all the problems we are familiar with: dereferencing null and invalid pointers, dangling pointers (pointers to deallocated memory), misaligned pointers, use of uninitialized memory, memory leaks, double-freeing memory, buffer overruns, and so on.

Since those of you who are familiar with Rust know its claim to fame is the borrow-checking system to ensure memory integrity, I'm going to jump right to the description of how C-rusted handles memory:

"C-rusted distinguishes between different kind of handles:"

"Owner handles: An owner handle referring to a resource has a special association with it. In a safe C-rusted program, every resource subject to explicit disposal (as opposed to implicit disposal, as in the case of stack variables going out of scope), must be associated to one (and only one) owner handle. Through the program evolution, the owner handle for a resource might change, due to a mechanism called ownership move, but at any given time said resource will have exactly one owner. The association between the current owner and the owned resource only ends when a designated function is called to dispose of the resource. Note that an owner handle is a kind of exclusive handle."

"Exclusive handles: An exclusive handle referring to a resource also has a special association with it: while the resource cannot be disposed via an exclusive non-owner handle (only an owner handle allows that), the exclusive handle allows modification of the resource. As a consequence of this fact, no more than one usable exclusive handle may exist at any given time: moreover, the existence of an usable exclusive handle is incompatible with the existence of any other usable handle."

"Shared handles: A shared handle referring to a resource can be used to access a resource without modifying it. As read-only access via multiple handles is well defined, there may exist several shared handles to a single resource. However, during the existence of a shared handle, no exclusive handle to the same resource can be used."

C-rusted in a Nutshell

#solidstatelife #computerscience #programminglanguages #rust

waynerad@diasp.org

Huawei devices now run HarmonyOS. "Since its introduction, the software has been receiving backlash from the media, especially from non-Chinese. HarmonyOS is criticized as an Android clone. However, Huawei has been denying this since the beginning. Recently, the man behind the software reaffirmed that HarmonyOS is different."

"Mr. Wang Chenglu goes by 'Father of HarmonyOS' in China."

"Unlike Android and iOS, HarmonyOS is designed for multiple devices. It is a unified OS that supports flexible deployment."

"The software uses AOSP (Android Open Source Project) components, which comprise code from the open-source community."

I first heard of HarmonyOS (called Hongmeng in Chinese, and not to be confused with SerenityOS) in 2019, following news about the US Department of Commerce putting restrictions on Huawei (due to its doing business with Iran in violation of sanctions). I figured Huawei started development on it in response to US government restrictions and rhetoric, but apparently development on HarmonyOS actually began in 2012.

HarmonyOS is said to be a multikernel operating system, which means it treats a multi-core machine as a network of independent cores, as if it were a distributed system. That seems wacky to me, and likely to make your system unnecessarily complicated. But maybe they thought of some way I don't know about to extend that to a multi-computer distributed system more easily than usual; usually making a distributed system is hard. Maybe having an inter-process message-passing system built directly into the OS, using it for communication on one machine, and extending it to communication between machines, makes it easier for Huawei to achieve their goal: making it easy for "Internet of Things" devices to communicate with Android devices (which use the AOSP components as noted), regular computers, network components Huawei makes such as routers, and other devices. My experience is that local communications and remote communications should be treated differently, because remote communications involve encoding and error conditions that don't apply in the local case.

#solidstatelife #computerscience #operatingsystems #huawei

https://www.gizmochina.com/2023/01/03/harmonyos-neither-android-nor-ios/

christophs@diaspora.glasswings.com
waynerad@diasp.org

"The world's first programming language based on classical Chinese is only about a month old, and volunteers have already written dozens of programs with it, such as one based on an ancient Chinese fortune-telling algorithm."

The core of the language "includes a renderer that can display a program in a manner that resembles pages from ancient Chinese texts."

I lucked out that the language used for programming languages is my native language, English.

"Currently wenyan-lang contributors are working on transpilers for Python, Ruby, JavaScript, C++, and Java, libraries for graphics and the graphical user interface (GUI)" and "Lingdong Huang is currently working on an introductory guide to programming in wenyan-lang that is itself written in classical Chinese."

World's First Classical Chinese Programming Language

#solidstatelife #computerscience #programminglanguages #chinese

waynerad@diasp.org

What makes a programming language "sticky"? What determines what languages grow and what languages die? According to Chris Hay, it's got nothing to do with how good the language is as a language. He says it depends on 3 things: 1) Raison d'ĂȘtre -- the original reason the language was created, 2) platform, 3) ecosystem.

Warning: Opinions.

"If we take a look at some of the other languages that are not on the list but will appear in some of the other stuff, things like Go which is created by the folks at Google, back in 2007, very similar reasons to Rust. A couple of different paradigms but very similar reasons. They hated C++, they wanted a simpler language, they wanted something that could deal with concurrency, and Go came off the back of that." -- No no no, Go and Rust were not created for similar reasons, not at all.

I actually like that this guy raises the subject of what he calls the "raison d'ĂȘtre". I've alluded to this before: one of my main theories of technology these days is that technologies never forget the reason they were originally invented, no matter how they are bent and twisted afterwards into doing something different. This is a lesson I've learned from PHP and Go and reflecting deeply on why they are the way they are and what they are and are not good for. I've had to use PHP at work for years, introduced Go on some small projects years ago, and only recently got authorization to use Go in the main project, which is otherwise a PHP monolith. The thing about PHP is that, you would think, since it was born as a language for making websites and has been continuously improved ever since, that it would be the best language for making websites... but it isn't. And it turns out to understand why, you have to discard the phrase "making websites" because it is too vague, and instead replace it with "web development" vs "web templating". It turns out PHP was born to do web templating. Go was born to do web development. And the more fundamental point I'm making here is that, when examining a technology's "raison d'ĂȘtre", you have to care about the details -- you have to care about the subtleties. You can't just do as this guy does in the video and read a sentence or two from the original founder(s) and call it a day. That's how you end up saying things like "Go was created for very similar reasons to Rust", which is just flat out wrong.

This is the point where I started banging out a long rant about PHP and Go, and in the interest of saving time, I decided to just scratch that and jump straight to the conclusion: PHP was invented for web templating while Go was invented for web development, where we define the distinction as follows: Web development means writing the code for the actual logic of an application -- what data it stores and retrieves and by whom and who is able to access it, what data is moved across the network to and from other parts of the internet, and what computations are done -- while web templating means the visual appearance of a web application, the "outer skin" -- the choice of fonts, the colors, the spacing and layout, the use of icons, etc. PHP remains to this day the best web templating language in existence and a ton of web templates, if they're not in straight HTML, are in PHP. But you are in for trouble if you try to go beyond templating to implementing the complete logic of your application. (By the way, if you go to the PHP website, it will tell you PHP is a "recursive" acronym (an acronym that contains itself) that stands for "PHP: Hypertext Preprocessor" -- but PHP originally stood for "Personal Home Page" -- much more indicative of its original purpose.)

Go, on the other hand, is the best language in existence for making any program that sits on a server out on a network somewhere and answers requests from the network, whether they come from an end user, with say a browser or mobile phone, or another computer, such as through an API call. Go was invented to do precisely that (by Google, which does a lot of that sort of thing) and this can't be said of any other language that is used to program servers. Java was invented for interactive TV systems, JavaScript was invented so people could add simple scripts to web forms with the Netscape browser, Python was invented to be a shell scripting language that's a "real" language, Ruby was invented to combine object-oriented concepts with Lisp-like functional concepts, and so on.

It might be worth expanding slightly on that list. Most companies using PHP switched to Java after their PHP codebases became unmanageable, most famously Facebook. Java's "write once run anywhere" philosophy came about because Java was conceived as a language to run on TV set-top boxes made by lots of different manufacturers, and given that, its current use as a language for Android phones, also made by lots of different manufacturers, isn't too far from its original raison d'ĂȘtre -- better than programming servers. JavaScript was put in as a language for web forms, but fortuitously the guy who came up with it based it on Scheme (modified to use curly braces like Java because "Java" was all the rage at the time), which made it a sufficiently powerful language that real, large applications could be built with it. I think the only reason it's ended up on the server is because "front end" programmers who were doing everything in JavaScript on browsers (because they had to) wanted to use the same language on servers to keep their lives simple (even though nobody should ever use JavaScript on servers). Ruby wasn't conceived as a web language at all and only took off with Rails, the web framework (and as noted in the video, has declined as the functionality of Rails has been replicated in all other languages). Python was conceived as a way of writing shell scripts with a "real" programming language, and its current use as a "glue" language for "gluing together" functionality provided by massive C++ packages like PyTorch is maybe not too dissimilar to the basic idea of a shell scripting language, which is to glue together Unix commands with simple logic.

Anyway, this brings me to Rust. Rust was born at Mozilla, created by a developer working on the web browser itself who was frustrated by the difficulty of writing C++ code without bugs -- especially the kind of horrid memory-management bugs that are especially hard to track down. I'm going to skip going into detail about the technical features of the language that address this, as I am for all the languages, to try to keep this short. Suffice it to say Rust has a unique memory-management system that addresses "lifetime management" issues (memory isn't leaked because the programmer forgot to free it, or freed twice, etc) and "concurrency" issues (race conditions, etc), in such a way as to not sacrifice performance compared with C++ (the language the browser was written in) and to have deterministic real-time performance -- unlike a garbage-collected language like Go. In Go's use case the garbage collector is OK, because Go programs sit on a server somewhere out on a network and answer requests from the network, and that usually involves allocating some memory to process some data and generate a reply, all of which gets freed up when the request is done.

I know I sound like I'm picking on the guy in the video, but I've learned these subtleties really matter. He thinks Rust and Go were created for similar reasons, and maybe that's true in some superficial way, but when you look at the details, you realize the reasons for their invention are completely different: you shouldn't use Rust to make a web application that runs on a server, and you shouldn't use Go to write a web browser from scratch (or a video game engine or anything else that requires high performance, multithreading, and real-time constraints). Technologies remember what they were born to do forever.

Maybe his latter two points -- platform and ecosystem -- hold greater merit. I don't have much to say about them, so I guess I'll just let you watch the video and let his comments on those matters stand.

One of the things he mentions is Python in data science, and it's really true, Python dominates that space. Most of the work is done in Jupyter Notebooks, which are excellent for writing and playing with code to interactively explore data. But this concept was actually pioneered by Mathematica and the Wolfram language. And since this has been the Wolfram language's "raison d'ĂȘtre" right from its inception way back in the 80s, you would expect the Wolfram language to dominate this space, but the company charges a lot of money for Mathematica and kept the Wolfram language proprietary. I haven't used it, so I can't say whether there's some subtlety that would have kept the Wolfram language from taking over the world of data science had it been open source. Assuming there wasn't any, Wolfram left the door open for Python to come in and take over.

is Rust and Go the new Ruby and PHP? what makes programming languages sticky and why they die - Chris Hay

#computerscience #programminglanguages

waynerad@diasp.org

Does P = NP? A proof has come out saying yes.

The P = NP question is a longstanding question in computer science. The informal intuition is, if a proposed solution to a problem can be quickly checked to see if it is correct or incorrect, can solutions be generated just as quickly?

More formally, "quickly" means polynomial time while "not quickly" means exponential time or anything larger (such as factorial time) ("polynomial time" in turn meaning the time it takes is proportional to a polynomial of the size of the input data), and "NP" meaning "nondeterministic polynomial time" means solutions can be verified quickly. P = NP means a proposed solution that can be checked in polynomial time can be generated in polynomial time. P != NP means it can't.
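A concrete example of "easy to check, apparently hard to find" is subset-sum, a classic NP problem: given a set of integers and a target, is there a subset that sums to the target? Verifying a proposed subset (the "certificate") takes polynomial time; the obvious way to find one tries all 2^n subsets. A small sketch of that asymmetry:

```rust
// Check a proposed certificate (a list of distinct indices into `set`):
// one pass and a sum, so polynomial time.
fn verify(set: &[i64], certificate: &[usize], target: i64) -> bool {
    certificate.iter().all(|&i| i < set.len())
        && certificate.iter().map(|&i| set[i]).sum::<i64>() == target
}

// Find a certificate by brute force: tries all 2^n subsets, so exponential time.
fn find(set: &[i64], target: i64) -> Option<Vec<usize>> {
    for mask in 0u64..(1u64 << set.len()) {
        let subset: Vec<usize> = (0..set.len()).filter(|&i| mask & (1u64 << i) != 0).collect();
        if verify(set, &subset, target) {
            return Some(subset);
        }
    }
    None
}

fn main() {
    let set = [3, 34, 4, 12, 5, 2];
    let target = 9;
    let cert = find(&set, target).expect("no subset sums to the target");
    println!("indices {:?} sum to {}", cert, target);
    assert!(verify(&set, &cert, target)); // checking is the cheap part
}
```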

Most computer scientists believe P != NP. In fact the entire field of cryptography is more or less built on this assumption.

I looked at the "proof" (using quote marks for now since it has not been verified or falsified by other mathematicians as far as I know yet). It is 10 pages and starts with the familiar Turing machine but quickly goes over my head.

A polynomial-time algorithm for deciding the Hilbert Nullstellensatz in funky P sub n super Z sub 2. A proof of P=NP hypothesis

#mathematics #computerscience #solidstatelife

waynerad@pluspora.com

"Keeping a project bisectable". This is the first I've heard of this "git bisect" command and it sounds intriguing. Has anyone out there used it?

"A 'bisectable' project is a project where one can reliably run git bisect, which is a very useful command to find a commit that introduces a bug. It works doing a binary search in the git history until finding the guilty commit. This process involves building each step of the bisect and running a test on each build to check if it's good or bad (that you can magically automate with git bisect run). The problem is, if you can't compile, you can't tell if this commit is before or after the bug (it can even be the culpable commit itself!). Then you need to jump and try another commit and hope that it will compile, making the process more painful. A lot of build breakages along the commit history can easily discourage a brave bisecter."

This made it sound like "git bisect" runs your tests, but upon reading the documentation I see that normally it is a series of subcommands that step you through the process: you run your tests yourself and tell it good or bad (or old or new) at each step. It does allow you to supply a program for it to execute at each step, via "git bisect run", to automate the whole process, though.
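For the automated mode, "git bisect run" executes the command you give it at each step and reads its exit code: 0 means good, 125 means "can't test this commit, skip it" (handy for the unbuildable commits the article laments), and any other code from 1 to 127 means bad. The check can be any script or program; to keep the examples here in one language, a hypothetical Rust wrapper around "cargo build" and "cargo test" might look like this (the test filter name is made up):

```rust
use std::process::{exit, Command};

fn main() {
    // If this commit doesn't even build, exit 125 so `git bisect run`
    // skips it rather than mislabeling it good or bad.
    let build = Command::new("cargo")
        .arg("build")
        .status()
        .expect("failed to run cargo build");
    if !build.success() {
        exit(125);
    }

    // Run the test that detects the regression ("regression_test_name"
    // is a placeholder). Exit 0 = good commit, 1 = bad commit, which is
    // what `git bisect run` expects.
    let test = Command::new("cargo")
        .args(["test", "regression_test_name"])
        .status()
        .expect("failed to run cargo test");
    exit(if test.success() { 0 } else { 1 });
}
```

You would mark the endpoints with "git bisect start", "git bisect bad", and "git bisect good <some-old-commit>", then hand a program like this to "git bisect run" and let it do the binary search.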

Keeping a project bisectable - tony is coding

#computerscience #tdd #git