Curious about what sort of coding capabilities ChatGPT has? Today, we put them to the test by getting ChatGPT to write a web design tutorial.
Artificial intelligence has been making waves across various industries, and the field of web design is no exception. The talk of the town has definitely been ChatGPT. But what, exactly, is ChatGPT capable of?
That’s a question a lot of people have been ruminating on lately, so today we’ve set out to explore it by putting OpenAI’s ChatGPT, running the GPT-4 model, to the test. We’ll examine how effective it is at writing a full tutorial on creating an interactive photo gallery for a website. But before we get into the specifics of our test, a few explanations.
What Can ChatGPT Do?
ChatGPT, developed by OpenAI, is a sophisticated AI model that has been trained on diverse internet text. It’s a language prediction model, meaning it generates text based on the prompts it’s given. It can write essays, answer questions, generate creative stories, translate languages, and even simulate conversations.
However, its ability goes beyond just generating text—it can also provide technical tutorials, including those related to web design. But the quality of the output depends significantly on the quality and specificity of the prompt given to the AI.
The Plan for This Article
To assess what ChatGPT can do when it comes to crafting a web design tutorial, we’re going to give it three different prompts. These prompts will task the AI with writing a tutorial on how to create an interactive photo gallery for a website, a common and important feature in web design.
Our prompts will vary in detail:
Simple Prompt: We’ll start with a basic, general prompt asking ChatGPT to write a tutorial on creating a photo gallery.
Detailed Prompt: Next, we’ll provide a more detailed prompt that specifies certain features we want in the photo gallery.
Highly Detailed Prompt: Finally, we’ll provide a highly detailed prompt with specific requirements on how the photo gallery should look and function.
Following each of these tutorials, we’ll examine its accuracy by putting the code to the test in CodePen.
Prompt #1: Basic Tutorial
To begin, we gave ChatGPT the following prompt:
And it provided a thorough response to this prompt. Before getting to the code, its introductory text was as follows:
Before we start, make sure you have a selection of images ready to use. For the purposes of this tutorial, I’ll refer to them as image1.jpg, image2.jpg, etc.
So far, so good. It then got down to the specifics, first with some HTML:
The first step is to build the HTML structure for the gallery. We’ll use a <div> to contain the gallery and <img> elements for each photo.
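A structure along those lines, using the placeholder image names mentioned earlier, might look like this (our own sketch for illustration, not ChatGPT’s verbatim output; the onclick handler anticipates the imageClick function that comes into play later):

```html
<!-- Hypothetical sketch: a container div holding one img per photo. -->
<div class="gallery">
  <img src="image1.jpg" alt="Image 1" onclick="imageClick(this)">
  <img src="image2.jpg" alt="Image 2" onclick="imageClick(this)">
  <img src="image3.jpg" alt="Image 3" onclick="imageClick(this)">
</div>
```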
And the code snippet it output looked like this:
Next up, ChatGPT produced some CSS:
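We won’t claim this matches ChatGPT’s output exactly, but a basic gallery stylesheet in this vein takes only a few rules (a Flexbox layout sketch of our own, with the `.gallery` class name assumed):

```html
<style>
  /* Hypothetical sketch: lay the thumbnails out in a wrapping row. */
  .gallery {
    display: flex;
    flex-wrap: wrap;
    gap: 10px;
  }
  .gallery img {
    width: 200px;
    height: auto;
    cursor: pointer;
  }
</style>
```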
It then finished off this basic image gallery tutorial with some parting advice about the JavaScript: it can live in an external file or in a <script> tag in the HTML itself.
All looks good so far, to be honest. But still, we had to put this to the test.
In CodePen, we pasted the code snippets provided by ChatGPT into the corresponding fields for HTML and CSS:
Good news! Everything worked just as it should. When clicking the image placeholders, the imageClick function performed beautifully:
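ChatGPT’s exact function isn’t shown here, but a handler in this spirit, one that reports which image was clicked via its alt text, takes only a few lines (our own sketch):

```html
<script>
  // Hypothetical sketch: announce which image was clicked,
  // using the alt text each image carries.
  function imageClick(img) {
    alert('You clicked: ' + img.alt);
  }
</script>
```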
But to get the full effect here, we had to add some custom image links to the HTML section, and give them some unique alt text:
Now we can test clicking the images:
Once again, everything worked just as it should.
This was a super basic prompt, of course. So it’s not all that surprising that it worked really well.
But what happens when we increase the difficulty?
Prompt #2: A Detailed Tutorial
To begin this attempt, we gave ChatGPT this prompt:
This time around, ChatGPT assumed we wanted a single HTML document. So, it first provided the required HTML structure:
Then it offered a second HTML output that had CSS contained within it:
The code snippets are all correct, but they’re structured strangely, and the whole thing is presented in a much more complicated way than it needed to be.
To get something more usable to test in CodePen, we prompted ChatGPT as follows:
This cleaned up the output for the HTML:
So we could paste it directly into CodePen for testing:
This is what the grid-based gallery output looks like in CodePen once custom images have been added:
And when clicking an image in the gallery, we get a lightbox effect: a larger version is displayed with a close button, and the background turns gray:
All in all, the second prompt worked well, too. You’d need to do some customization to make it look good, but the basic functions are present, and we didn’t have to edit any of the code to make it work.
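For reference, the core mechanics described above, a grid of thumbnails plus a lightbox overlay, can be sketched quite compactly. This is a simplified illustration under class names of our own choosing, not ChatGPT’s actual code:

```html
<div class="gallery">
  <img src="image1.jpg" alt="Image 1" onclick="openLightbox(this)">
  <img src="image2.jpg" alt="Image 2" onclick="openLightbox(this)">
</div>

<div id="lightbox" onclick="closeLightbox()">
  <img id="lightbox-img" src="" alt="">
</div>

<style>
  .gallery {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
    gap: 10px;
  }
  /* Grayed-out backdrop, hidden until an image is clicked. */
  #lightbox {
    display: none;
    position: fixed;
    inset: 0;
    background: rgba(0, 0, 0, 0.7);
    justify-content: center;
    align-items: center;
  }
  #lightbox.open { display: flex; }
</style>

<script>
  function openLightbox(img) {
    document.getElementById('lightbox-img').src = img.src;
    document.getElementById('lightbox').classList.add('open');
  }
  function closeLightbox() {
    document.getElementById('lightbox').classList.remove('open');
  }
</script>
```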
Prompt #3: Highly Detailed Prompt
Now, onto our third (and last) prompt test with ChatGPT. This prompt will still ask the AI to create an image gallery, but this time with many more specifics thrown at the LLM. Here’s our prompt:
That’s an awful lot for ChatGPT to handle at once, so let’s see if it can do it!
The output appears to be pretty successful, in a similar format to the previous prompts. Here’s how the HTML turned out:
And the CSS:
But of course, with this many requirements in play, let’s see how it pans out in CodePen:
Initial indicators here are that no errors are detected. So far, so good.
After adding in some image URLs to the HTML code snippet, we’re given this:
And when clicking on an image, a modal window opens. The background turns gray and a blue close button appears. You can close the modal window either by clicking the blue × or by clicking outside the image.
Interestingly enough, ChatGPT got a bit creative and added a hover effect on the close button. The blue × turns red on hover. If you remember our prompt, that wasn’t in our instructions.
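That effect amounts to a single CSS rule, something along these lines, with the class name being our assumption:

```html
<style>
  /* Hypothetical sketch: the blue close button turns red on hover. */
  .close-button { color: blue; }
  .close-button:hover { color: red; }
</style>
```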
Once again, our test prompt was a success. No edits required to get it to complete the tasks assigned.
As a final challenge, we prompted ChatGPT to generate a prompt for creating a more complex gallery. Here’s what it came up with:
Each image should be enclosed in a figure element with a corresponding figcaption element providing a brief description of the image.
When a user clicks on a thumbnail image, it should open up a full-sized version of the image in a modal window, centered on the screen, with the rest of the webpage darkened in the background. The modal window should also display the image’s description from the figcaption element, and have a ’close’ button to close the modal.
On the top of the gallery, add a dropdown filter that allows users to sort images based on categories. Each image belongs to a category and that data is stored in a ’data-category’ attribute on the image’s figure element. The dropdown filter should be populated dynamically based on the ’data-category’ values in the HTML. When a category is selected from the dropdown, only the images that belong to that category should be visible.
The CSS should use Flexbox for the layout of the grid and the modal window, and all color values should use CSS custom properties for easy theming and adjustments.
Out of curiosity, we fed this prompt back to ChatGPT and it did, indeed, produce a tutorial. And when tested in CodePen (and after adding image links, categories, and descriptions) this is what it produced:
Unfortunately, this much-more-complicated prompt didn’t work out so well. If you select a category from the dropdown, there’s no way to get back to the view that shows all images.
Now, clicking an image does make the image larger and fade the background, but the description text didn’t display. And the close button was nearly unclickable.
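The dropdown issue, at least, has a straightforward fix: include an “All” option. Stripped of the DOM wiring, the intended filtering rule looks something like this (our own sketch of the logic, not code from the generated tutorial):

```javascript
// Hypothetical sketch of the category filter, as pure logic.
// Given each figure's data-category value and the selected dropdown
// option, decide which figures should remain visible. An "all"
// option restores the full gallery view.
function visibleByCategory(categories, selected) {
  return categories.map(c => selected === 'all' || c === selected);
}

// Example: three figures, filtered to the "nature" category.
console.log(visibleByCategory(['nature', 'city', 'nature'], 'nature'));
// → [ true, false, true ]
console.log(visibleByCategory(['nature', 'city', 'nature'], 'all'));
// → [ true, true, true ]
```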
What Does This All Mean?
In our experiment, ChatGPT has demonstrated its ability to generate tutorials based on varying levels of detail in the prompts. Its proficiency at crafting clear and concise explanations about web design concepts, such as creating a photo gallery, is remarkable. It’s clear that AI can indeed be an effective tool for generating initial code snippets or kick-starting a coding project. But it struggles with complexity.
The critical takeaway here should not be overlooked: the notion that one can take these code snippets produced by AI and run with them, without a solid understanding of the underlying principles, is unrealistic. As powerful as AI tools like ChatGPT are, they are not a substitute for a deep understanding of the subject matter.
Knowing how to code means understanding not just how to assemble syntax, but also knowing why certain choices are made, how different parts of the code interact, and how to troubleshoot when things don’t go as planned. It means being able to adapt the code to your specific needs and being able to modify and extend it as those needs change.
Ultimately, ChatGPT can be a valuable resource for learning and exploration, providing a useful starting point and helping to generate ideas. However, the onus remains on the learner or developer to understand the generated code and to ensure that it fits their unique requirements.
Learn More About AI and ChatGPT on Tuts+
- Can You Use ChatGPT in Web Design? More Importantly, Should You? (Suzanne Scacca, 07 Mar 2023)
- What is Machine Learning (and How Does it Impact Designers)? (Brenda Barron, 19 Jun 2023)
- 13 Best AI Plugins for WordPress and WooCommerce for 2023 (Franc Lucas, 30 Jun 2023)
- How Web Designers Can Make Themselves Competitive in the Age of AI (Suzanne Scacca, 04 Jul 2023)
- AI in Web Design: 6 Templates Leveraging Machine Learning for Smarter UX (Brenda Barron, 06 Jul 2023)
- Comparing AI-Based Prototyping Tools: Which One Is Right for Your Web Design Project? (Brenda Barron, 28 Jun 2023)
- Is AI an SEO Killer? (Brenda Barron, 01 Aug 2023)
- 6 Coding Languages You Need to Learn to Get Into Machine Learning (Brenda Barron, 02 Aug 2023)