In this tutorial, we'll be wrapping up the texturing content. At the end, we'll have two cubes being rendered with different textures on them.
First, we'll make new shader files for textured objects. They're still really simple, but have some big changes.
Create vs_tex.glsl (and either tell your IDE to copy it to the output directory, copy it manually, or create it there) with:
#version 330

in vec3 vPosition;
in vec2 texcoord;

out vec2 f_texcoord;

uniform mat4 modelview;

void main()
{
    gl_Position = modelview * vec4(vPosition, 1.0);
    f_texcoord = texcoord;
}
This is more or less the same as our original vertex shader, but it now passes along a texture coordinate for each vertex instead of a color.
Create fs_tex.glsl (and either tell your IDE to copy it to the output directory, copy it manually, or create it there) with:
#version 330

in vec2 f_texcoord;

out vec4 outputColor;

uniform sampler2D maintexture;

void main()
{
    vec2 flipped_texcoord = vec2(f_texcoord.x, 1.0 - f_texcoord.y);
    outputColor = texture(maintexture, flipped_texcoord);
}
This fragment shader introduces some new concepts, but they're not too complex. The first is the sampler type, which is how we access a texture in our shader. Here we're using a sampler2D, but there are also 1D and 3D versions for textures of those dimensions. The texture function looks up the color of the texture at the given coordinate.
We have to flip the coordinates because OpenGL's texture coordinate origin is in the bottom-left corner, while the image data we load starts at the top-left, so the Y-axis is mirrored. There are other options (we could simply flip the images when we load them, which would be better optimization-wise), but this also doubles as a demonstration of what we can do with texture coordinates.
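If you'd rather flip the image once at load time instead of per fragment, one option is to flip the bitmap before uploading it. This is just a sketch, assuming the System.Drawing.Bitmap loading approach from the earlier texture tutorial:

Bitmap file = new Bitmap(filename);

// Flip the rows once on the CPU so the shader doesn't have to invert the Y coordinate.
file.RotateFlip(RotateFlipType.RotateNoneFlipY);

// ... then hand the bitmap to GL.TexImage2D as usual.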
These coordinates are normalized, meaning that they range from 0 to 1 across the texture. Outside of this range, the behavior is up to you. We can "clamp" the texture, which makes the pixels on its edges stretch on forever, or we can tell the texture to "repeat", which makes it loop again and again (useful for tiled textures). This behavior can be set for each axis separately, so a texture can repeat in one direction and clamp in the other.
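For reference, the wrap mode is set per axis with GL.TexParameter while the texture is bound. This isn't part of the tutorial code, just a small sketch of the options described above (repeat along S, clamp along T):

GL.BindTexture(TextureTarget.Texture2D, textureID);

// Repeat horizontally (S axis), clamp vertically (T axis).
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (int)TextureWrapMode.Repeat);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (int)TextureWrapMode.ClampToEdge);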
While we're adding new files, download the following images (they should be named "opentksquare.png" and "opentksquare2.png"):
Add these to your project as well. They'll be the textures we put on our cubes.
Now we need to modify Game.cs to use these new shaders.
First we'll add a variable to the class to store information about our texture coordinates, like we do vertices and colors:
Vector2[] texcoorddata;
In initProgram(), we'll need to load our new shader program and the two textures. Replace the existing function with the following:
void initProgram()
{
    lastMousePos = new Vector2(Mouse.X, Mouse.Y);

    GL.GenBuffers(1, out ibo_elements);

    shaders.Add("default", new ShaderProgram("vs.glsl", "fs.glsl", true));
    shaders.Add("textured", new ShaderProgram("vs_tex.glsl", "fs_tex.glsl", true));

    activeShader = "textured";

    textures.Add("opentksquare.png", loadImage("opentksquare.png"));
    textures.Add("opentksquare2.png", loadImage("opentksquare2.png"));

    TexturedCube tc = new TexturedCube();
    tc.TextureID = textures["opentksquare.png"];
    objects.Add(tc);

    TexturedCube tc2 = new TexturedCube();
    tc2.Position += new Vector3(1f, 1f, 1f);
    tc2.TextureID = textures["opentksquare2.png"];
    objects.Add(tc2);

    cam.Position += new Vector3(0f, 0f, 3f);
}
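The loadImage helper comes from the earlier texture tutorial. If you don't have it handy, a minimal version looks roughly like this; it's a sketch (assuming using System.Drawing and System.Drawing.Imaging), and your existing version may differ in details like mipmap generation:

int loadImage(string filename)
{
    int texID = GL.GenTexture();
    GL.BindTexture(TextureTarget.Texture2D, texID);

    using (Bitmap file = new Bitmap(filename))
    {
        // Lock the bitmap's pixels so they can be handed to OpenGL.
        BitmapData data = file.LockBits(new Rectangle(0, 0, file.Width, file.Height),
            ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);

        GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba,
            data.Width, data.Height, 0,
            OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0);

        file.UnlockBits(data);
    }

    GL.GenerateMipmap(GenerateMipmapTarget.Texture2D);
    return texID;
}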
The next change is in OnRenderFrame. Replace the existing loop over the Volumes in objects with the following:
foreach (Volume v in objects)
{
    GL.ActiveTexture(TextureUnit.Texture0);
    GL.BindTexture(TextureTarget.Texture2D, v.TextureID);

    GL.UniformMatrix4(shaders[activeShader].GetUniform("modelview"), false, ref v.ModelViewProjectionMatrix);

    if (shaders[activeShader].GetUniform("maintexture") != -1)
    {
        GL.Uniform1(shaders[activeShader].GetUniform("maintexture"), 0);
    }

    GL.DrawElements(BeginMode.Triangles, v.IndiceCount, DrawElementsType.UnsignedInt, indiceat * sizeof(uint));
    indiceat += v.IndiceCount;
}
This code binds the texture we've assigned to this object and, if the shader has a maintexture sampler, tells it which texture unit to sample from. We're using texture unit 0 for this (there are at least 8 available), but the choice of unit only starts to matter once an object needs more than one texture at a time.
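If an object ever does need a second texture at the same time (say a hypothetical second sampler called "overlay" in the fragment shader), the same pattern extends to the other units:

// Bind another texture to unit 1 and point the hypothetical "overlay" sampler at it.
GL.ActiveTexture(TextureUnit.Texture1);
GL.BindTexture(TextureTarget.Texture2D, overlayTextureID); // overlayTextureID is a stand-in name

if (shaders[activeShader].GetUniform("overlay") != -1)
{
    GL.Uniform1(shaders[activeShader].GetUniform("overlay"), 1); // 1 = TextureUnit.Texture1
}

// Switch back to unit 0 before binding the per-object textures again.
GL.ActiveTexture(TextureUnit.Texture0);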
Now we need to add texture coordinates to OnUpdateFrame.
First, after the definitions of verts, inds, and colors, we'll add another list for texture coordinates:
List<Vector2> texcoords = new List<Vector2>();
Next, we'll add the texture coordinates along with the other attributes in the loop over the Volumes in objects:
texcoords.AddRange(v.GetTextureCoords());
After "coldata = colors.ToArray();" add:
texcoorddata = texcoords.ToArray();
Finally, after the similar block of code for sending color values, add:
if (shaders[activeShader].GetAttribute("texcoord") != -1)
{
    GL.BindBuffer(BufferTarget.ArrayBuffer, shaders[activeShader].GetBuffer("texcoord"));
    GL.BufferData<Vector2>(BufferTarget.ArrayBuffer, (IntPtr)(texcoorddata.Length * Vector2.SizeInBytes), texcoorddata, BufferUsageHint.StaticDraw);
    GL.VertexAttribPointer(shaders[activeShader].GetAttribute("texcoord"), 2, VertexAttribPointerType.Float, true, 0, 0);
}
Now, if you run the project you should have two floating cubes, each with their own texture.
____
Up next we'll be loading and displaying a simple OBJ file (that is, a single object defined by its vertices, nothing fancier). I've modified the upcoming tutorials page slightly, since I decided that having a simple model viewer application as an example would be useful.
Welcome back! Can't wait to try this out!
Pete
I'm also enjoying these walkthroughs. I work in the game industry, but know next to nothing about graphics, so this has been really wonderful to have as a guide through OpenGL/OpenTk. Thanks :)
Sweet! Thanks for posting. :)
Please provide downloads of the collected files in the tutorials; it's almost impossible to understand your intentions as you do not explain the file modifications from one file to another very well, and I often find myself going back to the very first tutorial and trying to go through every tutorial to find an issue, only to find that I did everything exactly and you're just not explaining it very well. If you offered the code as a download it would alleviate all of this trouble and pain, and would let me actually learn something as I can speak code better than I can speak your version of English. Thanks.
Actually, I had previously started on compiling and uploading the final products, but took the page down after the first one was posted. It's something I've avoided doing because I know it just means people will skip to the finished product instead of reading anything (and I know too many lazy coders who don't know how anything works because they do that).
However, I haven't been able to write anything substantial for months, so I'm planning on uploading the completed tutorial projects and trying to focus more on example projects and code snippets instead.
Feel free to ask any questions here if you can't get something to work in the meantime.
Will you still be doing a tutorial on loading an obj file?
That should be next after the 2D example I have planned.
I'm looking forward to the next part!
Hi Neo, I really love your OpenTK tutorials! They are really helpful. Though, I had some problems with this last one. I have trouble actually loading the textures. I copied your code but cannot make it work. Do you have any idea what the problem might be?
Thanks!!
It seems like it is not possible to find the attributes and uniforms, the counters are 0
Hi,
Can you please put your Game.cs code on pastebin for me so I can help track down the issue? Alternatively, you might want to look at the completed version on GitHub to see where the difference is.
Firstly, thanks a lot for these tutorials. I tried going through the whole 6 tutorials and they were very good. I have just started OpenGL and graphics coding last week, so these tutorials were a great source of information. I learnt a lot from them. I also added some mods like a simple .obj parser, implemented all the functions you have described in a GLControl, drag-and-drop of simple .obj files onto the running GLControl panel to load and draw them, etc.
But, I am facing one strange problem. When I add an object into the 'List<Volume> Objects' list using the statement in 'initProgram()'
'Objects.Add(new Cube());'
or the other shapes in the tutorial, all of them display properly.
I have created a simple obj reader/parser which will read the obj and load its data into a 'myModel' class that implements the 'Volume' abstract class.
These 'myModel' objects also get added to the same 'List<Volume> Objects' list as the previously mentioned one.
The thing is, when I add ONE object to the list through my obj parser, it is drawn properly.
When I add multiple shapes (tetrahedron, cube etc.) directly in 'initProgram()' , they are also drawn properly.
Even the situation where I first add ONE object through the obj parser and then other objects directly through 'Objects.Add(new ...)', all of them are drawn properly.
But when I add multiple objects through the obj parser, only the first object is displayed properly. The others keep flickering in lines and shapes.
First, I thought the problem was in the .obj itself. So I made an exact copy of a cube obj file that I confirmed was parsed and drawn properly. Then, when I loaded the original cube obj file and the copy cube obj file, only one of them is drawn. The second one flickers into long lines.
I have tried with multiple shape objs like cone and cube, two different-sized cubes, tetrahedrons and cones, spheres etc., in various combinations and orders of loading, but in each case only the first one is displayed. The second and onwards load and show up as various blinking lines and surfaces. I have tried fiddling with various BeginModes, wireframe modes etc., but with the same result.
I assume that adding multiple objects using the Volume abstract class would come in a future tutorial, but do you have any suggestions or ideas as to what is happening?
I must also add that my 'myModel' class was based on your Cube class, except that the vertices, indices and colors are defined as arrays and I implemented functions to set their values, while GetVerts() and the other functions remain as they are and return the value of the appropriate array rather than the static Vector3 array of the Cube class.
Hi PRinCE,
The code should be able to handle multiple objects of different Volume sub-classes being drawn at once without any issue.
Can you please post your code (or even just put myModel on pastebin) so I can take a look at it? It sounds like you've been doing everything right, but I might be able to track the error down.
Thanks for the fast reply Kabuto.
Here is the pastebin link for my custom shape class. Currently I have changed it a bit, so now it doesn't inherit from Volume.cs, and I have made the corresponding changes to the GLViewer class. Still, the same problem as above. It seems that when multiple objects are loaded from files, somehow GL gets confused....
I have also pasted the GLViewer code....
http://pastebin.com/HK0Y6mJ3
One more question: Some of the .obj files have faces that are of multiple types, meaning some faces may be triangular, some quad and some polygonal. In this case, does setting the BeginMode to polygon do the trick, or should I divide the faces by type into separate buffers and then draw each buffer in a loop with a separate BeginMode?
Sorry to take so long to reply this time.
Would you mind sharing your entire project? I was able to use your class with my own OBJ parser, and I'm not experiencing any issues with multiple objects (http://i.imgur.com/Wg6r95b.png -- sorry for the really weird texturing going on).
You'll want to divide the faces by type. They don't need separate buffers if you have a way to know how many faces of each type you have and what order they're in (ideally sorted by type). Just call GL.DrawElements with the "count" parameter set to the number of vertices you need for each BeginMode. Here's an example of the same OBJs with one third of the indices done with BeginMode.Lines (the models were triangles only, so I had to improvise, but it would work as well if they were quads), and the rest with BeginMode.Triangles: http://i.imgur.com/U6ewlv5.png
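As a rough sketch of that idea (the triangleFaces and quadFaces lists are made-up names for whatever your parser collects, not tutorial code): if the index buffer is laid out with all the triangle indices first and the quad indices after them, one draw call per type does the job:

// Hypothetical counts gathered by the OBJ parser while sorting faces by type.
int triangleIndexCount = triangleFaces.Count * 3;
int quadIndexCount = quadFaces.Count * 4;

// Draw the triangle section of the index buffer first...
GL.DrawElements(BeginMode.Triangles, triangleIndexCount, DrawElementsType.UnsignedInt, 0);

// ...then the quad section, offset (in bytes) past the triangle indices.
GL.DrawElements(BeginMode.Quads, quadIndexCount, DrawElementsType.UnsignedInt, triangleIndexCount * sizeof(uint));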
Dividing faces by type seems to be working. Thanks for the idea.
Let me polish up my project, then I will upload it somewhere and post the link here.
OK, so I have ended up being able to draw multiple objects nicely in wireframe mode. But when I use PolygonMode.Fill, I end up with a flat kind of display where the object looks 2D-ish. I have tried totally removing the color data so that I could get the default silvery color, but in vain. If I provide color data, it ends up as a flat object in that color.
What I mean is a sphere looks like a sphere in wireframe, but in fill mode, it looks like a circle.
My object coordinates are large, so I have to use a camera position of (0f, 500f, 500f), NearZ of 0.1f and FarZ of 500f.
Accordingly, what should the light position for light0 be? What I want is that the area facing the screen remains lit and shows its curves and protrusions properly. If I move the camera or rotate the object, the light should still clearly show whatever is facing towards me. So, how do I do that?
I saw the pics you uploaded, how did you get that silvery color? That is what I am trying to get...
The flat display is because there is nothing being done in the shaders to change the color for each fragment/vertex. The wireframe looks more like a sphere because your mind is able to recognize the depth in the curves.
The grey color in my picture isn't anything special: I had the vertex color set to the sine of the Z coordinate at each vertex. It was just my lazy way to make it look shaded before I added proper lighting.
Light0 is part of the old-fashioned "fixed function" pipeline, which has been deprecated and removed from the core profile of modern OpenGL (graphics cards will probably keep supporting it through the compatibility profile for a long time, but it's best to move away from it). The fixed function pipeline features aren't really compatible with the shader based "programmable pipeline" used in modern OpenGL.
To get lighting with modern OpenGL, you'll need a shader that adds it. This is a slightly more involved process, but not too difficult. You'll need to generate normals (they may also be included in your OBJ files to save time) for each vertex of the model, and then use a shader to change the colors of the model to make it look lit (look up Lambertian diffuse for the easiest way to accomplish this). To make the light move with the camera, set the light's position to the camera's position, so no matter what, any model is lit from the side you're looking at it from.
I plan on handling lighting in a later tutorial, but if you want, I can pass along sort of an "Early Access" copy of the shaders I'll use.
I would be very grateful if you could pass along the shaders. Currently I am looking at Blinn–Phong shading, which I came across when I was searching for Lambertian diffuse as per your suggestion.
And here is the vertex shader code I've been trying to get working:
//// Original
//#version 330
//in vec3 vPosition;
//in vec3 vColor;
//out vec4 color;
//uniform mat4 modelview;
//void
//main()
//{
// gl_Position = modelview * vec4(vPosition, 1.0);
// color = vec4( vColor, 1.0);
//}
// Under Construction...
#version 330
in vec3 vPosition;
in vec3 vColor;
in vec3 vNormal;
out vec4 color;
out vec3 normalInterp;
out vec3 vertPos;
uniform mat4 modelview, normalmat;
void
main()
{
gl_Position = modelview * vec4(vPosition, 1.0);
color = vec4( vColor, 1.0);
vertPos = vec3(gl_Position) /gl_Position.w;
normalInterp = vec3(normalMat * vec4(vNormal, 0.0));
}
and the Frag Shader code:
//// Original
//#version 330
//in vec4 color;
//out vec4 outputColor;
//void
//main()
//{
// outputColor = color;
//}
// Under Construction...
#version 330
precision mediump float;
in vec4 color;
in vec3 normalInterp;
in vec3 vertPos
const vec3 lightPos = vec3(1.0,1.0,1.0);
const vec3 ambientColor = vec3(0.1, 0.0, 0.0);
const vec3 diffuseColor = vec3(0.5, 0.0, 0.0);
const vec3 specColor = vec3(1.0, 1.0, 1.0);
void main() {
vec3 normal = normalize(normalInterp);
vec3 lightDir = normalize(lightPos - vertPos);
float lambertian = max(dot(lightDir,normal), 0.0);
float specular = 0.0;
if(lambertian > 0.0) {
vec3 viewDir = normalize(-vertPos);
// this is blinn phong
vec3 halfDir = normalize(lightDir + viewDir);
float specAngle = max(dot(halfDir, normal), 0.0);
specular = pow(specAngle, 16.0);
}
gl_FragColor = vec4(ambientColor +
lambertian * diffuseColor +
specular * specColor, 1.0);
}
The color output in the frag shader is confusing, since I don't know what to do with it as I am writing the color & lighting to gl_FragColor.
But somehow, the GetAttribute and GetUniform come up empty with the attribute count and uniform count as 0.
Any idea why?
Also, I plan to make lightPos an input variable so that I can update it to the camera position and the direction from the camera lens onto the object, so that however the camera moves, whatever faces the camera is always lit.
Suggestions, advice or corrections will be gratefully welcomed.
The C# code is what I learned from your tutorials, and I made some modifications for different types of modes based on your suggestions.
I have tried doing the GL.BindBuffer, BufferData & VertexAttribPointer routine to set the light position, and tried to set it as a uniform vec3 variable with GL.Uniform1, but as I mentioned above, the attribs and uniforms are empty during shader loading.
I just tested your shader, and the only issue I found was a missing semicolon after vertPos is defined in the fragment shader (there should've been a message in the console mentioning the error). Honestly, my shaders are too WIP compared to what you have, to be worth sending.
gl_FragColor has been removed as of OpenGL 3.1, so it's best to use an output variable for that. All you'd need to do is add an output variable (like outputColor in the original) and set its value to the color you want. The shaders will still run fine without the change on most hardware, though.
Making lightPos an input variable is the right way to do it. Making it a uniform is the correct approach as well (since the lighting won't be different for each vertex).
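A minimal sketch of that, assuming the fragment shader's const lightPos is replaced with "uniform vec3 lightPos;" and using the ShaderProgram helper and cam object from these tutorials:

int lightPosLocation = shaders[activeShader].GetUniform("lightPos");
if (lightPosLocation != -1)
{
    // Tie the light to the camera so whatever faces the camera is always lit.
    GL.Uniform3(lightPosLocation, cam.Position.X, cam.Position.Y, cam.Position.Z);
}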
Thanks for that.
But still, something certainly is off...
I tried to do the drawing vertex by vertex in immediate mode without any kind of shaders, directly using the GL.Light() and GL.Material() functions. My friend, who is also learning OpenTK, did something similar.
Somehow, his viewer renders the obj properly while mine looks flat. I compared our code, and our workflow seems almost identical. Right now, both of us are not using shaders and both are using immediate mode. The problem is not with hardware, as when I run his exe on my system it renders properly, while my exe doesn't on either of our systems...
I cannot understand what is going on... :(
I finally got the shading right, but without the shaders. I used lighting and recalculated the surface and vertex normals. For flatter shading, use the surface normals; for smoother shading, use the vertex normals.
Here are the functions for the recalculation of normals:
/// <summary>
/// Calculate surface normals.
/// a, b & c are defined with counterclockwise winding;
/// assuming that your face vertices are defined counterclockwise, just
/// pass face.vertex[0], face.vertex[1] and face.vertex[2] in that order.
/// </summary>
/// <param name="a">The first vertex.</param>
/// <param name="b">The second vertex.</param>
/// <param name="c">The third vertex.</param>
internal static OpenTK.Vector3 CalculateFaceNormal(OpenTK.Vector3 a, OpenTK.Vector3 b, OpenTK.Vector3 c)
{
    OpenTK.Vector3 A, B;

    // A
    A.X = a.X - b.X;
    A.Y = a.Y - b.Y;
    A.Z = a.Z - b.Z;

    // B
    B.X = b.X - c.X;
    B.Y = b.Y - c.Y;
    B.Z = b.Z - c.Z;

    // calculate the cross product and place the resulting vector
    // into the normal
    OpenTK.Vector3 faceNormal = new OpenTK.Vector3();
    faceNormal.X = (A.Y * B.Z) - (A.Z * B.Y);
    faceNormal.Y = (A.Z * B.X) - (A.X * B.Z);
    faceNormal.Z = (A.X * B.Y) - (A.Y * B.X);

    // normalize
    return normalize(faceNormal);
}
/// <summary>
/// Recalculate vertex normals for smooth shading.
/// Pass the vertex for which the normal is to be recalculated.
/// </summary>
/// <param name="v">The Volume object which is to be shaded.</param>
/// <param name="vert">The vertex for which the normal is to be recalculated.</param>
internal static OpenTK.Vector3 CalculateVertexNormal(Volume v, OpenTK.Vector3 vert)
{
    OpenTK.Vector3 vertNormal = new OpenTK.Vector3();
    List<DL.Face> fac = new List<DL.Face>();

    foreach (DL.Face f in v.faces)
    {
        bool add = false;

        // For the given vertex, select all faces that share this vertex.
        foreach (OpenTK.Vector3 vec in f.verts)
        {
            if (vec == vert)
            {
                add = true;
                break;
            }
            else
            {
                add = false;
                continue;
            }
        }

        if (add == true)
        {
            fac.Add(f);
            add = false;
        }
    }

    // For each face that shares this vertex, average its face normal.
    foreach (DL.Face f in fac)
    {
        vertNormal.X += f.faceNormal.X;
        vertNormal.Y += f.faceNormal.Y;
        vertNormal.Z += f.faceNormal.Z;
    }

    // normalize
    return normalize(vertNormal);
}
And the function to normalize, since GL's normalize is more resource-expensive:
// This same function is used to normalize both the faceNormal and vertNormal.
internal static OpenTK.Vector3 normalize(OpenTK.Vector3 normal)
{
    // calculate the length of the vector
    float len = (float)(Math.Sqrt((normal.X * normal.X) + (normal.Y * normal.Y) + (normal.Z * normal.Z)));

    // avoid division by 0
    if (len == 0.0f)
        len = 1.0f;

    // reduce to unit size
    normal.X /= len;
    normal.Y /= len;
    normal.Z /= len;

    return normal;
}
Use faceNormal for flat shading and the vertNormal for smooth shading.
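As a usage sketch (model stands for a loaded myModel/Volume instance; GetVerts() and faces come from the classes discussed above, while the normals list is just an illustration of where you might store the results):

// First pass: give every face its normal (counterclockwise winding assumed, as noted above).
foreach (DL.Face f in model.faces)
{
    f.faceNormal = CalculateFaceNormal(f.verts[0], f.verts[1], f.verts[2]);
}

// Second pass: one smooth normal per vertex, averaged from the faces that share it.
List<OpenTK.Vector3> normals = new List<OpenTK.Vector3>();
foreach (OpenTK.Vector3 vert in model.GetVerts())
{
    normals.Add(CalculateVertexNormal(model, vert));
}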
Yo man, using this, how would I render 3 cubes the same way? I get an error if I just do objects[3] with the same functions.
You should just need to add another cube to the objects List to draw a third cube. objects[3] would cause an exception either way because C# uses a zero-based numbering system, so objects[2] is the third entry in the list.
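In other words, something like this in initProgram(), reusing one of the textures from this tutorial just as an example:

TexturedCube tc3 = new TexturedCube();
tc3.Position += new Vector3(-1f, -1f, -1f); // offset it so the cubes don't overlap
tc3.TextureID = textures["opentksquare.png"];
objects.Add(tc3); // this becomes objects[2], the third entry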
Thanks for the tutorials. They are really helpful. Could you please add more details on texture mapping? It is unclear why some of the texture coordinates are negative.
The negative texture coordinates are to keep the texture facing the same way on each side. It's just a side effect of the vertex order I used.
When I use my own image the program breaks; do you know why? It's still a PNG.
Can you please post the image you used and any other modifications to the code you made?
Your tutorials are really awesome! I've made a SquarePyramid class inherited from Volume, and I can put a texture only on the square part of the pyramid; the upper triangles get their texture options from the square texture, no matter how many texture coordinates I give.
ReplyDeleteCan you give me any tips where should i search the problem?
I found it. I had declared the GetVerts() method badly.
But I found another problem. I have 16 vertices: 4 for the square, 4x3 for the triangles, but I need 20 texcoords to draw textures on each side. But it draws only 4 sides; I think it detects only 16 texcoords.
Hi,
Can you upload your class to somewhere like Pastebin so I can take a look at it?
http://pastebin.com/k9zA7pNa
The last 4 Vector2s do not work in GetTextureCoords(); that triangle is dark.
Double check your texture coordinates. You should have one for each vertex, but in the code there, they seem to be set up for quadrilateral faces on all the sides (3 vertices/coordinates per triangular side).
I keep getting the error
An unhandled exception of type 'System.ArgumentException' occurred in mscorlib.dll
on this line
shaders.Add("default", new ShaderProgram("vs.glsl", "fs.glsl", true));
This is most likely a result of the file not being copied to the output directory when the program is run. If you click the vs.glsl and/or fs.glsl file in your Solution Explorer and have the Properties window open, you should be able to see "Copy to Output Directory" as one of the settings; make sure this is set to either "Copy always" or "Copy if newer".
I keep getting an error
An unhandled exception of type 'System.AccessViolationException' occurred in OpenTK.dll
On this line
GL.DrawElements(BeginMode.Triangles, v.IndiceCount, DrawElementsType.UnsignedInt, indiceat * sizeof(uint));
Can anyone tell me why?
Thanks
How do I draw a textured cube AND a blue cube with no texture?
You can do this by changing the shader program between drawing each one. This can also be useful for things like a GUI, which you would want to have unlit.
If you want to see this in action, look at the sprites example. It has a multi-shader mode where different shapes are drawn with different shaders in the same scene.
Hello.
First, thanks for the tutorial, it's quite a good way to begin with OpenTK. At least I find it so.
I have a question. The doctor leading our Computer Graphics course insists on passing the model, view, and projection matrices into the vertex shader separately, rather than precalculating the combined matrix in the C# part of the code the way you do. How would one do that? I don't quite see how to yet.
Thanks in advance.
Hi Schwalbe,
You would just need to change the shader to have 3 matrices passed into it (with code to pass them to the shader like the current combined matrix). Then you can just multiply them in the shader like you would in C#.
However, this is probably a little less efficient most of the time.
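A rough sketch of the C# side (inside the render loop), assuming the shader declares "uniform mat4 model, view, projection;" and the vertex shader computes "gl_Position = projection * view * model * vec4(vPosition, 1.0);" — those uniform names, v.ModelMatrix and cam.GetViewMatrix() are illustrative assumptions, not the tutorial's actual members:

Matrix4 model = v.ModelMatrix;
Matrix4 view = cam.GetViewMatrix();
Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(1.3f,
    ClientSize.Width / (float)ClientSize.Height, 1.0f, 40.0f);

// Send the three matrices separately instead of one combined modelview matrix.
GL.UniformMatrix4(shaders[activeShader].GetUniform("model"), false, ref model);
GL.UniformMatrix4(shaders[activeShader].GetUniform("view"), false, ref view);
GL.UniformMatrix4(shaders[activeShader].GetUniform("projection"), false, ref projection);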
Hey Kabuto,
I've read the tutorials and got these things nicely done. (Rendered a textured cube.)
Got a question though, about using 2 or more shader programs in a single game window.
(For a simple example using the tutorial: I want to render both the untextured color cube and the textured cube.)
Then what code should I modify/add in order to do so?
I'll get my toy code, which is based on this tutorial, posted if required.
Thanks in advance.
Hello,
You can definitely use two shaders in one program. You just need to use GL.UseProgram to change the shader between drawing the shapes. You need to make sure the required data has been sent to the graphics card beforehand. Another option would be to create a shader capable of drawing both types of cube based on some value you pass to it.
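A minimal sketch of switching programs mid-frame (colorCube and texturedCube stand for the two Volume objects), assuming the ShaderProgram class from the earlier parts — its ProgramID field and EnableVertexAttribArrays/DisableVertexAttribArrays helpers — and that both cubes' indices sit in one shared element buffer, colored cube first:

// Colored cube with the "default" program:
GL.UseProgram(shaders["default"].ProgramID);
shaders["default"].EnableVertexAttribArrays();
GL.UniformMatrix4(shaders["default"].GetUniform("modelview"), false, ref colorCube.ModelViewProjectionMatrix);
GL.DrawElements(BeginMode.Triangles, colorCube.IndiceCount, DrawElementsType.UnsignedInt, 0);
shaders["default"].DisableVertexAttribArrays();

// Textured cube with the "textured" program:
GL.UseProgram(shaders["textured"].ProgramID);
shaders["textured"].EnableVertexAttribArrays();
GL.BindTexture(TextureTarget.Texture2D, texturedCube.TextureID);
GL.Uniform1(shaders["textured"].GetUniform("maintexture"), 0);
GL.UniformMatrix4(shaders["textured"].GetUniform("modelview"), false, ref texturedCube.ModelViewProjectionMatrix);
GL.DrawElements(BeginMode.Triangles, texturedCube.IndiceCount, DrawElementsType.UnsignedInt, colorCube.IndiceCount * sizeof(uint));
shaders["textured"].DisableVertexAttribArrays();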
Thx for the response Kabuto. May I ask a little bit more?
Suppose that I am generating a Render/Update function for each type,
i.e. making
OnRenderFrame(e)
{
base.OnRenderFrame(e);
... // ViewPort, Clear, and such
// Render for each kind
RenderFrameColorCube(e);
RenderFrameTextureCube(e);
GL.Flush();
SwapBuffers();
}
and
OnUpdateFrame(e)
{
base.OnUpdateFrame(e);
UpdateFrameColorCube(e);
UpdateFrameTextureCube(e);
}
then where should I call each GL.UseProgram?
if (shaders[activeShader].GetAttribute("maintexture") != -1)
Why GetAttribute and not GetUniform?
Hi McThrok,
I recently had this pointed out to me. It's a typo that doesn't have an effect on the code (since it was by default using the zero-th texture). It's fixed on GitHub, but I haven't had time to fix it in the text of the tutorials yet.
The updated code is now in the tutorial. Thanks for reminding me.
Hi, first of all, thanks for the tutorial, it was very helpful!
However, I've had a few issues I haven't been able to fix:
1) I keep getting an access violation error on the GL.DrawElements line (updating drivers didn't fix it)
2) The textures aren't mapped correctly: https://i.imgur.com/c5u3fEi.png
Do you know what the issue is?