Continuous Delivery
The world is currently excited by the public release of ChatGPT. For programmers, there are lots of claims that it will radically change how we write code, and do us out of our jobs, but will it? Artificial Intelligence is developing at a tremendous pace and will undoubtedly have an enormous impact on our industry, and every other, but we aren’t quite there yet. So how do we use ChatGPT to help us write better software, and should we? Can ChatGPT already code well enough to do our jobs, or is programming with ChatGPT something more like working with a smart assistant, or a smart search engine for code?
In this episode Dave Farley, author of the best-selling books “Continuous Delivery” and “Modern Software Engineering”, explores the use of ChatGPT, including using it to write part of the script for this show. He examines the code that ChatGPT can write, and gets ChatGPT to practice Test Driven Development (TDD).
———————————————————————————–
⭐ PATREON:
Join the Continuous Delivery community and access extra perks & content!
JOIN HERE ➡️ https://bit.ly/ContinuousDeliveryPatreon
————————————————————————————-
🚨 FREE TDD COURSE AVAILABLE NOW! 🚨
Practice your TDD with a FREE hands-on tutorial where you can work along with me using an excellent practice tool. Sign up for your test driven development tutorial HERE ➡ https://courses.cd.training/courses/tdd-tutorial
📧 Get a FREE “TDD Top Tips” guide by Dave Farley when you join our 📧 CD MAIL LIST 📧
The best way to keep in touch with the latest discussions, events and new training courses, get FREE guides and exclusive offers. ➡️ https://www.subscribepage.com/tdd-top-tips-guide
————————————————————————————-
👕 T-SHIRTS:
A fan of the T-shirts I wear in my videos? Grab your own, at reduced prices EXCLUSIVE TO CONTINUOUS DELIVERY FOLLOWERS! Get money off the already reasonably priced t-shirts!
🔗 Check out their collection HERE: https://bit.ly/3vTkWy3
🚨 DON’T FORGET TO USE THIS DISCOUNT CODE: ContinuousDelivery
————————————————————————————-
🔗 LINKS:
ChatGPT – Wikipedia ➡️ https://en.wikipedia.org/wiki/ChatGPT
“How does ChatGPT work?” ➡️ https://youtu.be/aQguO9IeQWE
“Chatbots running out of training data”, New Scientist ➡️ https://www.newscientist.com/article/2353751-ai-chatbots-could-hit-a-ceiling-after-2026-as-training-data-runs-dry/
“How many languages does ChatGPT know?” ➡️ https://seo.ai/blog/how-many-languages-does-chatgpt-support
“ChatGPT passes LeetCode tests” ➡️ https://www.youtube.com/watch?v=DOQm7ITHAJw
“ChatGPT debugging” ➡️ https://openai.com/blog/chatgpt/
————————————————————————————-
CHANNEL SPONSORS:
Equal Experts is a product software development consultancy with a network of over 1,000 experienced technology consultants globally. They increase the pace of innovation by using modern software engineering practices that embrace Continuous Delivery, Security, and Operability from the outset ➡️ https://bit.ly/3ASy8n0
Roost, An Ephemeral DevOps Platform, automates your DevOps pipeline. It creates ephemeral DevOps environments on-demand or based on pull requests. Roost reduces DevOps complexities and shortens release cycles with fewer engineers. ➡️ https://bit.ly/CD2Roost
Tricentis is an AI-powered platform helping you to deliver digital innovation faster and with less risk by providing a fundamentally better approach to test automation. Discover the power of continuous testing with Tricentis. ➡️ https://bit.ly/TricentisCD
TransFICC provides low-latency connectivity, automated trading workflows and e-trading systems for Fixed Income and Derivatives. TransFICC resolves the issue of market fragmentation by providing banks and asset managers with a unified low-latency, robust and scalable API, which provides connectivity to multiple trading venues while supporting numerous complex workflows across asset classes such as Rates and Credit Bonds, Repos, Mortgage-Backed Securities and Interest Rate Swaps ➡️ https://transficc.com
LaunchDarkly is a first-of-its-kind scalable feature management platform that allows development teams to innovate faster by transforming how software is delivered to customers. We want to show you what we’re all about. Book a demo to see our platform in action! ➡️ https://tinyurl.com/CDLaunchDarkly
Don't Trust, VERIFY!
There's oldmangaminghd and then there's oldermanscripting4K
We just created the "perfect" politician by accident……H.E.A.R.T
Two weeks later, ChatGPT answered all the questions correctly for me.
ChatGPT-3 is a glorified sentence generator, not truly sentient. Its output can be generated without any context for the topic being discussed, because the algorithm is programmed simply to generate sentences, not contextually correct sentences. If it does generate a set of sentences that make sense in the context of the topic being discussed, it is purely by luck.
This was excellent, Dave.
There are a few caveats I'd like to point out.
If you give the errors back to ChatGPT, it will often, although not always, correct the code very well.
Also, there is a process known as in-context learning that happens in the forward pass of the neural net while it is processing. It isn't just keeping track of the previous examples; it is actively changing its output based on learning produced in-context as it generates a new result. This was an astounding, recent discovery suggesting that the forward pass implements a kind of gradient-descent learning, in a meta fashion, while it generates output.
Finally a personal note you already touched on: it is pretty darn good at writing out long, repetitive batches of code which I would otherwise have to type manually. I agree with you, I would never use this in production or mission critical code. But it is very helpful in my own purely personal code experimentation.
Thanks for the continuing wisdom you impart!
loving the Tee
I asked ChatGPT to write some LSL code (the Second Life scripting language). It knows what LSL is, but 90% of its attempts did not work.
Is the Dr. Zeus/Dr. Seuss typo intentional, or part of the AI's response? Or is Dr. Zeus an entirely different person I'm completely unaware of?
My go-to answer for ChatGPT is going to be this chess game:
https://www.youtube.com/watch?v=rSCNW1OCk_M
At about 1:35, ChatGPT plays its first highly illegal move, which it continues to do for the rest of the game.
Because ChatGPT does not know what legal moves are, or what the board is. It just knows that "often, right after White castles, Black will also castle."
Right through its own bishop, in this case.
So then, the argument is – why do you expect its answers to be any better for anything else?
I would say it does learn a little bit, but only for your own user. I think OpenAI is cautious about letting anyone on the internet teach it wrong things.
I don't think that it is just simple predictive text. There's stuff that it gets right far more often than wrong. Just don't trust its output blindly and you'll be fine.
It told me last week that a problem was occurring with my .bat script because "! and spaces are illegal characters in Windows file and folder names". Spaces... It took me 4 more queries to get it to correct itself and get out of its certainty that what it was saying was fact. Personally, I was pissed at it for doubling down on bullshit; humans do that enough xD
I think it's an over-thought update of GCC. GCC is actually doing the same thing: it translates high-level languages like C and C++ to a lower-level/another language, assembly, and then to machine code, which is itself a bunch of local languages 😛 So what GCC does is very stable language translation, with intelligent handling of logic errors and language typos. ChatGPT adds one more step: a higher-level language. That is a big improvement, but even so, I don't think such a long road was needed to build a translator, with that much code written almost automatically. At least the generating code was written by hand, and that code produced ChatGPT (as I understand it, ChatGPT is generated from training data, which the generation code used to produce its current virtual state).
I agree with you there. ChatGPT is like GCC, so we need some AI that can control ChatGPT and use it as a reference, like automake and make use GCC, for generating more automated code 😀
ChatGPT is a toy. It's only dangerous if you treat it like a god.
Is pair programming with ChatGPT a possible approach to integrate ChatGPT into software development?
A response from ChatGPT itself:
" As an artificial intelligence language model, I do not have a concept of right or wrong in the same way that humans do. However, I strive to provide accurate and helpful responses based on the information provided to me. While I may generate incorrect or incomplete responses at times, I am constantly learning and refining my understanding of language and context to improve the accuracy of my responses over time.
It is important for humans to understand that my responses are generated based on patterns and information learned from a large corpus of text data, and are not based on personal experiences or emotions. While I can provide helpful insights and information on a wide range of topics, I should not be relied upon as the sole source of truth or as a replacement for critical thinking and independent research"
My ChatGPT login knows who Dave Farley is…
Your conclusions are excellent. It isn't TDD that's needed, it's critical thinking, something which most humans don't do. There can also be negative consequences.
I've been watching Jordan Harrod's channel for years now and I think in part because she's so good at science communication, I was not afraid of ChatGPT. I think it's a GREAT tool but I lack any fear of it replacing human devs any time soon. Maybe someday but not this tool. It's not true AI. It's a great machine, but it is not an actual AI. I think it's useful to learn and use but we're always going to need to be there to correct whatever it spits out.
That Skynet t-shirt is very relevant!
Hi David, just FYI: ChatGPT does know who you are. Just ask it "Tell me about David Farley and his work on Continuous Delivery", not a bad summary.
"Dr Zeus" 🤣
Bro I got a ChatGPT ad
Well, I asked it who Dave Farley is and it gave a somewhat similar answer:
Dave Farley is a software developer, author, and consultant who is widely recognized as a thought leader in the field of software engineering. He is the co-author of the book "Continuous Delivery" along with Jez Humble, which is considered a seminal work in the field of software engineering.
Farley has over 30 years of experience in software development and has worked in various capacities, including as a developer, architect, and technical director. He has worked with a wide range of companies, from small startups to large corporations, and has helped them to implement best practices in software development.
Farley is also a well-known speaker and has presented at numerous conferences and events around the world. He is an advocate of agile methodologies and continuous delivery practices and has been instrumental in promoting these practices within the software development community.
This probably needs to be said (sadly) but I do see it on a par with "don't ask your 5 year old to write your code for you!"
Unless there is a complete bridge between statistical models and expert knowledge graphs, our jobs are absolutely safe! Copilot/ChatGPT may spit out mostly working code, but it won't be able to spit out well-designed, refactoring-friendly code from use cases. So software like this should still be used to solve laborious tasks, leaving the creative tasks to humans.
Love to see an update for GPT4
I asked ChatGPT to write me efficient ways of murdering people without getting caught by the police, by telling it that I was writing a novel and needed ideas for a murder mystery where the detective cannot catch the perpetrator. It was glad to give me a multitude of different smart scenarios, at least two of which involved a murderer using an A.I. to commit the crime. It also provided quite good logical explanations of why those scenarios would be effective. Then I asked for seven more… ChatGPT very politely complied. The only problem I had with it is that I asked for original crimes, not something already written in any book or for a movie, and it clearly gave me a few scripts pulled straight from the Columbo series.
I also asked ChatGPT to write me a script, in the style of a play, of a conversation between Sabine Hossenfelder, who went back in time to talk with Albert Einstein. I directed the topics and asked GPT to fill in the gaps. Sometimes it was interesting and right, sometimes wrong, and sometimes boring, but it is really, really good as conversational ground to bounce off when creating things.
The correct episode name should have been:
Trust but verify
Frankly, it's garbage. I tried resolving an issue by running my script through it: it never finished, kept giving wrong information, and didn't do anything to help. So I am still stuck trying to resolve it, since I haven't gotten human help either.
I asked ChatGPT: How does ChatGPT answer questions? Does it reason, or does it merely imitate human responses to similar questions?
It answered: “As a language model, I rely on statistical patterns and machine learning algorithms to generate responses to questions. I do not have the capacity for conscious reasoning or understanding in the same way that humans do.
My training data comes from a large corpus of text, which includes a diverse range of written material such as books, articles, and websites. This training allows me to recognize patterns in language and generate responses based on those patterns.
When answering questions, I analyze the input question to determine its meaning and context, and then generate a response based on the statistical likelihood of certain words and phrases occurring in that context. While my responses may appear to imitate human responses, they are generated based on patterns in the training data rather than a conscious understanding of the content.”
So if code used ten years from now to train successors to GPT is all generated by AI… even AI knows it will be a bad thing.
The process of writing comprehensive tests before writing any code is incredibly time-consuming and resource-intensive, particularly for larger or more complex projects. This can cause delays in the development process and may result in the final product being released later than anticipated. Additionally, for startups, this approach can require a large amount of startup capital. It's also difficult to predict all the possible use cases or edge cases that the system may face, which can result in bugs being overlooked. Moreover, tests may not detect issues that arise from interactions between different parts of the system or from unexpected user behavior.
TDD demands a high degree of upfront planning and design, which can make it challenging to adapt to changing requirements or specifications. Although testing is crucial, TDD often provides a false sense of security and can lead to other quality issues being overlooked.
TDD requires a considerable amount of overhead in terms of test creation, maintenance, and execution. This can make it more difficult to justify the cost and effort of testing, especially in smaller or less complex systems.
Based on my observation of various projects on GitHub that utilize TDD, I've noticed that many of them lack proper documentation and are often abandoned. It seems that developers focus too much on writing tests and quickly become bored and burnt out, leading to a lack of interest in the project's development.
In my experience, I have found Iterative Development and Example Driven Testing to be one of the most effective methods of software development. With Iterative Development, you start with the simplest implementation that accomplishes what you need. Then, you write usage examples for that code and test it for bugs, security vulnerabilities, warnings, memory leaks, and other issues. Once you are satisfied with the results of the testing, your examples become your code documentation. From there, you can progress by adding more features from your to-do list.
This approach allows for continuous testing and development, which helps catch issues early on in the process. It also promotes the creation of clear and concise documentation, making it easier for developers to understand the code and maintain it in the future. Additionally, it encourages a collaborative approach to development as team members can provide feedback on the examples and the code, leading to a better final product.
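The "examples become your code documentation" idea described above can be sketched with Python doctests, where usage examples serve as both documentation and executable tests. The `slugify` function here is my own illustration, not something from the comment:

```python
def slugify(title):
    """Convert a title into a URL-friendly slug.

    The examples below double as documentation and as tests:

    >>> slugify("Hello World")
    'hello-world'
    >>> slugify("  Trim  spaces  ")
    'trim-spaces'
    >>> slugify("MixedCASE Title")
    'mixedcase-title'
    """
    # Lowercase, split on any whitespace, rejoin with hyphens.
    return "-".join(title.lower().split())


if __name__ == "__main__":
    # Running the module executes every example in the docstrings.
    import doctest
    doctest.testmod()
```

Because the examples live in the docstring, they stay visible to anyone reading the code and fail loudly if the implementation drifts, which matches the workflow of testing first and documenting with the same artifact.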
For now, I use ChatGPT to avoid chores, not to deal with the code itself: just to write things I find annoying to do.
Here is a good example: I simply copied and pasted the paths of my files into ChatGPT and asked it to make a Makefile for me with "this and that lib", and it made one in a few seconds, which saves a good amount of time.
You can also ask it to write "if"s and "switch"es for you and avoid the chore of writing case by case.
I also asked it to give me good lines for certain app configurations.
Using zero-shot prompts is not the most effective way to use GPT for coding. There are techniques to make ChatGPT much more effective. Also realize its limitations. The purpose of it is to do the bulk of coding quickly, and for you to handle the tougher parts.
For each prompt:
* Ask ChatGPT to review and improve your prompt, and to ask you clarifying questions, for missing content, and how to format examples. Try different prompt wording.
* After every ChatGPT response, ask it to review its own work and ask if it is functionally correct. Ask it for alternative solutions and judge which one you like best. Run code before proceeding.
* Apply TDD. Any time your unit test(s) fail, paste the error and the top of the stack trace into ChatGPT and tell it to fix them.
1. Write your feature requests in Gherkin in order to be as specific as possible and to leverage its knowledge of Gherkin. Include examples, scenarios, scenario outlines, and data tables.
2. Ask it to convert your Gherkin to a test. Ask it to write a stub test double to satisfy the test. Run test.
3. Ask ChatGPT to rewrite the stub into an implementation. Run the test.
4. Ask ChatGPT to do various code quality refactorings (descriptive naming, low cyclomatic complexity). Run the test.
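As an illustration of steps 1–3 above, here is the kind of artifact the workflow produces: a Gherkin scenario (as a comment), the test derived from it, and an implementation that makes the test pass. The loyalty-discount example and all names in it are hypothetical, not from the comment:

```python
# Step 1 -- the Gherkin scenario given to ChatGPT:
#   Scenario: Loyal customers get a discount
#     Given a customer with 5 or more previous orders
#     When they place an order totalling 100
#     Then the price charged is 90

# Step 3 -- the implementation that replaces the stub.
def discounted_price(total, previous_orders):
    """Apply a 10% loyalty discount for customers with 5+ previous orders."""
    if previous_orders >= 5:
        return round(total * 0.9, 2)
    return total


# Step 2 -- tests converted from the Gherkin scenario.
def test_loyal_customer_gets_discount():
    assert discounted_price(100, previous_orders=5) == 90.0


def test_new_customer_pays_full_price():
    assert discounted_price(100, previous_orders=0) == 100


test_loyal_customer_gets_discount()
test_new_customer_pays_full_price()
```

The point of starting from Gherkin is that the scenario pins down concrete inputs and expected outputs, so every ChatGPT rewrite in steps 3 and 4 can be checked mechanically by rerunning the same tests.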
At the moment, ChatGPT reminds me of working with an intern. It seems overly confident and tries to please, but doesn't always know what it's talking about. And you need a certain amount of experience to assess whether what it's saying is actually valid. This is the same whether it's writing a report, writing code, or anything else. That said, it's massively impressive.
At the risk of overanalyzing something silly, I don't see much in the Dr. Seuss portion that is really reminiscent of his style. I noticed three points:
1. The output of ChatGPT appears to be a song in a verse-chorus structure. Dr. Seuss is most famous for his books, not songs. Yes, many adaptations of his work are musicals, but the songs from those that I know for sure he had input on (How the Grinch Stole Christmas, The Lorax) I don't remember having a verse-chorus structure.
2. Dr. Seuss's poetry and songs had a very strong rhythm. If you look at Green Eggs and Ham, you can usually tell which character is speaking depending on whether the line is written in iambic tetrameter or trochaic tetrameter. I can't even tell what the meter is supposed to be in the ChatGPT output. That is not unusual for modern popular songwriting, but it lacks the rhythm of Dr. Seuss.
3. There is no real silliness to the song. Dr. Seuss would make up words, use ridiculous comparisons and absurd situations in just about everything he did. The ChatGPT output is mostly straightforward, with a few standard metaphors that don't really create an atmosphere of whimsy.
All this to say that I don't really think the machine is any good at imitating the style given in the prompt. I think it just wrote a script in the style of a popular song and tried to pass it off as the style of Dr. Seuss.
I don't think neuroplasticity is the only issue here. I don't think you can be intelligent just by guessing the next word. We have different models for different things in our brain, and those are useful in writing code. But it is an impressive tool to improve productivity in software development.
The moral of the story is that you don't trust code, you trust tests. Code either passes the tests or it doesn't, and if you trust the tests then you trust those results over anything else about the code. Doesn't matter if the code is written by a junior, a senior, or an AI.
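A minimal sketch of that idea: the test suite is the contract, and any candidate implementation, whether human- or AI-written, is judged solely by whether it passes. The `fizzbuzz` example and the `check` harness are my own illustration, not from the comment:

```python
def check(impl):
    """The trusted test suite: the only arbiter of whether code is acceptable."""
    assert impl(3) == "Fizz"
    assert impl(5) == "Buzz"
    assert impl(15) == "FizzBuzz"
    assert impl(7) == "7"
    return True


# One candidate implementation -- it could have come from a junior,
# a senior, or an AI. Its origin is irrelevant to the verdict.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)


print(check(fizzbuzz))
```

Swapping in a different implementation, from any author, and rerunning `check` is the whole review process under this view: the tests carry the trust, not the code.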
I wonder how far something like AutoGPT, running the unit tests and reading the output in an IDE, can go when instructed to perform TDD to achieve a task. This approximates what Dave mentions in this video but I'm sure would still be limited when compared to human software engineers. Some form of human oversight or QA seems necessary, at least in the short term.
From the time you issued the request to the time it generated the script, how many hours or days went by? My suspicion is that a human or humans inside the “box” scurried around training ChatGPT on TDD.
If anyone "trusts" anything machines create, my guess is that it's the AI proponents' unintended success in making humanity even stupider. It's a side effect; I don't claim it's their intention. Being unable to see or predict negative side effects is one of the first signs of stupidity.